From b554f41c2198c4f14d392fdf34188c2276c6076e Mon Sep 17 00:00:00 2001 From: "Rostislav M. Georgiev" Date: Thu, 11 Jun 2020 18:21:27 +0300 Subject: [PATCH 1/2] kubeadm: Document component config related changes This adds brief descriptions of some side steps in the upgrade process. Most notably, it mentions the existence of the component config state table at the end of the `kubeadm upgrade plan` output and the need to specify a file with upgraded configs to `kubeadm upgrade apply` if the config state table says so. Signed-off-by: Rostislav M. Georgiev --- .../kubeadm/kubeadm-upgrade.md | 19 +++++++++++++++++++ 1 file changed, 19 insertions(+) diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md index 2c4c3d135e41e..b38a3309804d1 100644 --- a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md +++ b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md @@ -142,9 +142,21 @@ Find the latest stable 1.18 version: kubeadm upgrade apply v1.18.0 _____________________________________________________________________ + + The table below shows the current state of component configs as understood by this version of kubeadm. + Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or + resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually + upgrade to is denoted in the "PREFERRED VERSION" column. + + API GROUP CURRENT VERSION PREFERRED VERSION MANUAL UPGRADE REQUIRED + kubeproxy.config.k8s.io v1alpha1 v1alpha1 no + kubelet.config.k8s.io v1beta1 v1beta1 no + _____________________________________________________________________ + ``` This command checks that your cluster can be upgraded, and fetches the versions you can upgrade to. + It also shows a table with the component config version states. 
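The "MANUAL UPGRADE REQUIRED" column of that table can also be checked mechanically from saved `kubeadm upgrade plan` output. A minimal sketch — the `plan_output` sample below mirrors the table shown above, and the parsing approach is an illustrative assumption, not part of kubeadm:

```shell
# Hypothetical: scan a saved copy of the component config state table from
# `kubeadm upgrade plan` and list any API groups flagged for manual upgrade.
plan_output='API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no'

# Skip the header row; print the API group of any row whose last column is "yes"
needs_manual=$(printf '%s\n' "$plan_output" | awk 'NR > 1 && $NF == "yes" { print $1 }')

if [ -z "$needs_manual" ]; then
  echo "no manual component config upgrades required"
else
  echo "manual upgrade required for: $needs_manual"
fi
```

With the sample table above, no row is marked "yes", so the sketch reports that no manual component config upgrades are required.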
{{< note >}} `kubeadm upgrade` also automatically renews the certificates that it manages on this node. @@ -152,6 +164,12 @@ To opt-out of certificate renewal the flag `--certificate-renewal=false` can be For more information see the [certificate management guide](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs). {{< /note >}} +{{< note >}} +If `kubeadm upgrade plan` shows any component configs that require manual upgrade, users must provide +a config file with replacement configs to `kubeadm upgrade apply` via the `--config` command line flag. +Failing to do so will cause `kubeadm upgrade apply` to exit with an error and not perform an upgrade. +{{< /note >}} + - Choose a version to upgrade to, and run the appropriate command. For example: ```shell @@ -430,6 +448,7 @@ and post-upgrade manifest file for a certain component, a backup file for it wil - The control plane is healthy - Enforces the version skew policies. - Makes sure the control plane images are available or available to pull to the machine. +- Generates replacements and/or uses user-supplied overrides if component configs require version upgrades. - Upgrades the control plane components or rollbacks if any of them fails to come up. - Applies the new `kube-dns` and `kube-proxy` manifests and makes sure that all necessary RBAC rules are created. - Creates new certificate and key files of the API server and backs up old files if they're about to expire in 180 days. From 65e914e5e5e055aa09601e60c4da78c3fffb2c15 Mon Sep 17 00:00:00 2001 From: "Rostislav M. Georgiev" Date: Mon, 13 Jul 2020 19:38:47 +0300 Subject: [PATCH 2/2] Update the `kubeadm upgrade` page for 1.19 Signed-off-by: Rostislav M. 
Georgiev --- .../kubeadm/kubeadm-upgrade.md | 164 +++++++++--------- 1 file changed, 84 insertions(+), 80 deletions(-) diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md index b38a3309804d1..61c37d390b5fa 100644 --- a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md +++ b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md @@ -4,17 +4,18 @@ reviewers: title: Upgrading kubeadm clusters content_type: task weight: 20 -min-kubernetes-server-version: 1.18 +min-kubernetes-server-version: 1.19 --- This page explains how to upgrade a Kubernetes cluster created with kubeadm from version -1.17.x to version 1.18.x, and from version 1.18.x to 1.18.y (where `y > x`). +1.18.x to version 1.19.x, and from version 1.19.x to 1.19.y (where `y > x`). To see information about upgrading clusters created using older versions of kubeadm, please refer to following pages instead: +- [Upgrading kubeadm cluster from 1.17 to 1.18](https://v1-18.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/) - [Upgrading kubeadm cluster from 1.16 to 1.17](https://v1-17.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/) - [Upgrading kubeadm cluster from 1.15 to 1.16](https://v1-16.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/) - [Upgrading kubeadm cluster from 1.14 to 1.15](https://v1-15.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-15/) @@ -31,7 +32,7 @@ The upgrade workflow at high level is the following: ## {{% heading "prerequisites" %}} -- You need to have a kubeadm Kubernetes cluster running version 1.17.0 or later. +- You need to have a kubeadm Kubernetes cluster running version 1.18.0 or later. - [Swap must be disabled](https://serverfault.com/questions/684771/best-way-to-disable-swap-in-linux). 
- The cluster should use a static control plane and etcd pods or external etcd. - Make sure you read the [release notes]({{< latest-release-notes >}}) carefully. @@ -51,19 +52,19 @@ The upgrade workflow at high level is the following: ## Determine which version to upgrade to -Find the latest stable 1.18 version: +Find the latest stable 1.19 version: {{< tabs name="k8s_install_versions" >}} {{% tab name="Ubuntu, Debian or HypriotOS" %}} apt update apt-cache madison kubeadm - # find the latest 1.18 version in the list - # it should look like 1.18.x-00, where x is the latest patch + # find the latest 1.19 version in the list + # it should look like 1.19.x-00, where x is the latest patch {{% /tab %}} {{% tab name="CentOS, RHEL or Fedora" %}} yum list --showduplicates kubeadm --disableexcludes=kubernetes - # find the latest 1.18 version in the list - # it should look like 1.18.x-0, where x is the latest patch + # find the latest 1.19 version in the list + # it should look like 1.19.x-0, where x is the latest patch {{% /tab %}} {{< /tabs >}} @@ -75,18 +76,18 @@ Find the latest stable 1.18 version: {{< tabs name="k8s_install_kubeadm_first_cp" >}} {{% tab name="Ubuntu, Debian or HypriotOS" %}} - # replace x in 1.18.x-00 with the latest patch version + # replace x in 1.19.x-00 with the latest patch version apt-mark unhold kubeadm && \ - apt-get update && apt-get install -y kubeadm=1.18.x-00 && \ + apt-get update && apt-get install -y kubeadm=1.19.x-00 && \ apt-mark hold kubeadm - # since apt-get version 1.1 you can also use the following method apt-get update && \ - apt-get install -y --allow-change-held-packages kubeadm=1.18.x-00 + apt-get install -y --allow-change-held-packages kubeadm=1.19.x-00 {{% /tab %}} {{% tab name="CentOS, RHEL or Fedora" %}} - # replace x in 1.18.x-0 with the latest patch version - yum install -y kubeadm-1.18.x-0 --disableexcludes=kubernetes + # replace x in 1.19.x-0 with the latest patch version + yum install -y kubeadm-1.19.x-0 
--disableexcludes=kubernetes {{% /tab %}} {{< /tabs >}} @@ -118,28 +119,28 @@ Find the latest stable 1.18 version: [preflight] Running pre-flight checks. [upgrade] Running cluster health checks [upgrade] Fetching available versions to upgrade to - [upgrade/versions] Cluster version: v1.17.3 - [upgrade/versions] kubeadm version: v1.18.0 - [upgrade/versions] Latest stable version: v1.18.0 - [upgrade/versions] Latest version in the v1.17 series: v1.18.0 + [upgrade/versions] Cluster version: v1.18.4 + [upgrade/versions] kubeadm version: v1.19.0 + [upgrade/versions] Latest stable version: v1.19.0 + [upgrade/versions] Latest version in the v1.18 series: v1.18.4 Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply': COMPONENT CURRENT AVAILABLE - Kubelet 1 x v1.17.3 v1.18.0 + Kubelet 1 x v1.18.4 v1.19.0 - Upgrade to the latest version in the v1.17 series: + Upgrade to the latest version in the v1.18 series: COMPONENT CURRENT AVAILABLE - API Server v1.17.3 v1.18.0 - Controller Manager v1.17.3 v1.18.0 - Scheduler v1.17.3 v1.18.0 - Kube Proxy v1.17.3 v1.18.0 - CoreDNS 1.6.5 1.6.7 - Etcd 3.4.3 3.4.3-0 + API Server v1.18.4 v1.19.0 + Controller Manager v1.18.4 v1.19.0 + Scheduler v1.18.4 v1.19.0 + Kube Proxy v1.18.4 v1.19.0 + CoreDNS 1.6.7 1.7.0 + Etcd 3.4.3-0 3.4.7-0 You can now apply the upgrade by executing the following command: - kubeadm upgrade apply v1.18.0 + kubeadm upgrade apply v1.19.0 _____________________________________________________________________ @@ -174,7 +175,7 @@ Failing to do so will cause `kubeadm upgrade apply` to exit with an error and no ```shell # replace x with the patch version you picked for this upgrade - sudo kubeadm upgrade apply v1.18.x + sudo kubeadm upgrade apply v1.19.x ``` @@ -186,75 +187,78 @@ Failing to do so will cause `kubeadm upgrade apply` to exit with an error and no [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config 
-oyaml' [preflight] Running pre-flight checks. [upgrade] Running cluster health checks - [upgrade/version] You have chosen to change the cluster version to "v1.18.0" - [upgrade/versions] Cluster version: v1.17.3 - [upgrade/versions] kubeadm version: v1.18.0 + [upgrade/version] You have chosen to change the cluster version to "v1.19.0" + [upgrade/versions] Cluster version: v1.18.4 + [upgrade/versions] kubeadm version: v1.19.0 [upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y - [upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd] - [upgrade/prepull] Prepulling image for component etcd. - [upgrade/prepull] Prepulling image for component kube-apiserver. - [upgrade/prepull] Prepulling image for component kube-controller-manager. - [upgrade/prepull] Prepulling image for component kube-scheduler. - [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager - [apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd - [apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler - [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver - [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd - [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler - [upgrade/prepull] Prepulled image for component etcd. - [upgrade/prepull] Prepulled image for component kube-apiserver. - [upgrade/prepull] Prepulled image for component kube-controller-manager. - [upgrade/prepull] Prepulled image for component kube-scheduler. - [upgrade/prepull] Successfully prepulled the images for all the control plane components - [upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.18.0"... 
- Static pod: kube-apiserver-myhost hash: 2cc222e1a577b40a8c2832320db54b46 - Static pod: kube-controller-manager-myhost hash: f7ce4bc35cb6e646161578ac69910f18 - Static pod: kube-scheduler-myhost hash: e3025acd90e7465e66fa19c71b916366 + [upgrade/prepull] Pulling images required for setting up a Kubernetes cluster + [upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection + [upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull' + [upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.19.0"... + Static pod: kube-apiserver-kind-control-plane hash: b4c8effe84b4a70031f9a49a20c8b003 + Static pod: kube-controller-manager-kind-control-plane hash: 9ac092f0ca813f648c61c4d5fcbf39f2 + Static pod: kube-scheduler-kind-control-plane hash: 7da02f2c78da17af7c2bf1533ecf8c9a [upgrade/etcd] Upgrading to TLS for etcd - [upgrade/etcd] Non fatal issue encountered during upgrade: the desired etcd version for this Kubernetes version "v1.18.0" is "3.4.3-0", but the current etcd version is "3.4.3". 
Won't downgrade etcd, instead just continue - [upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests308527012" - W0308 18:48:14.535122 3082 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC" + Static pod: etcd-kind-control-plane hash: 171c56cd0e81c0db85e65d70361ceddf + [upgrade/staticpods] Preparing for "etcd" upgrade + [upgrade/staticpods] Renewing etcd-server certificate + [upgrade/staticpods] Renewing etcd-peer certificate + [upgrade/staticpods] Renewing etcd-healthcheck-client certificate + [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-07-13-16-24-16/etcd.yaml" + [upgrade/staticpods] Waiting for the kubelet to restart the component + [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s) + Static pod: etcd-kind-control-plane hash: 171c56cd0e81c0db85e65d70361ceddf + Static pod: etcd-kind-control-plane hash: 171c56cd0e81c0db85e65d70361ceddf + Static pod: etcd-kind-control-plane hash: 59e40b2aab1cd7055e64450b5ee438f0 + [apiclient] Found 1 Pods for label selector component=etcd + [upgrade/staticpods] Component "etcd" upgraded successfully! 
+ [upgrade/etcd] Waiting for etcd to become available + [upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests999800980" [upgrade/staticpods] Preparing for "kube-apiserver" upgrade [upgrade/staticpods] Renewing apiserver certificate [upgrade/staticpods] Renewing apiserver-kubelet-client certificate [upgrade/staticpods] Renewing front-proxy-client certificate [upgrade/staticpods] Renewing apiserver-etcd-client certificate - [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-03-08-18-48-14/kube-apiserver.yaml" + [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-07-13-16-24-16/kube-apiserver.yaml" [upgrade/staticpods] Waiting for the kubelet to restart the component [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s) - Static pod: kube-apiserver-myhost hash: 2cc222e1a577b40a8c2832320db54b46 - Static pod: kube-apiserver-myhost hash: 609429acb0d71dce6725836dd97d8bf4 + Static pod: kube-apiserver-kind-control-plane hash: b4c8effe84b4a70031f9a49a20c8b003 + Static pod: kube-apiserver-kind-control-plane hash: b4c8effe84b4a70031f9a49a20c8b003 + Static pod: kube-apiserver-kind-control-plane hash: b4c8effe84b4a70031f9a49a20c8b003 + Static pod: kube-apiserver-kind-control-plane hash: b4c8effe84b4a70031f9a49a20c8b003 + Static pod: kube-apiserver-kind-control-plane hash: f717874150ba572f020dcd89db8480fc [apiclient] Found 1 Pods for label selector component=kube-apiserver [upgrade/staticpods] Component "kube-apiserver" upgraded successfully! 
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade [upgrade/staticpods] Renewing controller-manager.conf certificate - [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-03-08-18-48-14/kube-controller-manager.yaml" + [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-07-13-16-24-16/kube-controller-manager.yaml" [upgrade/staticpods] Waiting for the kubelet to restart the component [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s) - Static pod: kube-controller-manager-myhost hash: f7ce4bc35cb6e646161578ac69910f18 - Static pod: kube-controller-manager-myhost hash: c7a1232ba2c5dc15641c392662fe5156 + Static pod: kube-controller-manager-kind-control-plane hash: 9ac092f0ca813f648c61c4d5fcbf39f2 + Static pod: kube-controller-manager-kind-control-plane hash: b155b63c70e798b806e64a866e297dd0 [apiclient] Found 1 Pods for label selector component=kube-controller-manager [upgrade/staticpods] Component "kube-controller-manager" upgraded successfully! 
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade [upgrade/staticpods] Renewing scheduler.conf certificate - [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-03-08-18-48-14/kube-scheduler.yaml" + [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-07-13-16-24-16/kube-scheduler.yaml" [upgrade/staticpods] Waiting for the kubelet to restart the component [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s) - Static pod: kube-scheduler-myhost hash: e3025acd90e7465e66fa19c71b916366 - Static pod: kube-scheduler-myhost hash: b1b721486ae0ac504c160dcdc457ab0d + Static pod: kube-scheduler-kind-control-plane hash: 7da02f2c78da17af7c2bf1533ecf8c9a + Static pod: kube-scheduler-kind-control-plane hash: 260018ac854dbf1c9fe82493e88aec31 [apiclient] Found 1 Pods for label selector component=kube-scheduler [upgrade/staticpods] Component "kube-scheduler" upgraded successfully! 
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace - [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster - [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace + [kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" + [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster + W0713 16:26:14.074656 2986 dns.go:282] the CoreDNS Configuration will not be migrated due to unsupported version of CoreDNS. The existing CoreDNS Corefile configuration and deployment has been retained. [addons] Applied essential addon: CoreDNS [addons] Applied essential addon: kube-proxy - [upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.18.0". Enjoy! + [upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.19.0". Enjoy! [upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so. 
``` @@ -296,18 +300,18 @@ Upgrade the kubelet and kubectl on all control plane nodes: {{< tabs name="k8s_install_kubelet" >}} {{% tab name="Ubuntu, Debian or HypriotOS" %}} - # replace x in 1.18.x-00 with the latest patch version + # replace x in 1.19.x-00 with the latest patch version apt-mark unhold kubelet kubectl && \ - apt-get update && apt-get install -y kubelet=1.18.x-00 kubectl=1.18.x-00 && \ + apt-get update && apt-get install -y kubelet=1.19.x-00 kubectl=1.19.x-00 && \ apt-mark hold kubelet kubectl - # since apt-get version 1.1 you can also use the following method apt-get update && \ - apt-get install -y --allow-change-held-packages kubelet=1.18.x-00 kubectl=1.18.x-00 + apt-get install -y --allow-change-held-packages kubelet=1.19.x-00 kubectl=1.19.x-00 {{% /tab %}} {{% tab name="CentOS, RHEL or Fedora" %}} - # replace x in 1.18.x-0 with the latest patch version - yum install -y kubelet-1.18.x-0 kubectl-1.18.x-0 --disableexcludes=kubernetes + # replace x in 1.19.x-0 with the latest patch version + yum install -y kubelet-1.19.x-0 kubectl-1.19.x-0 --disableexcludes=kubernetes {{% /tab %}} {{< /tabs >}} @@ -329,18 +333,18 @@ without compromising the minimum required capacity for running your workloads. 
{{< tabs name="k8s_install_kubeadm_worker_nodes" >}} {{% tab name="Ubuntu, Debian or HypriotOS" %}} - # replace x in 1.18.x-00 with the latest patch version + # replace x in 1.19.x-00 with the latest patch version apt-mark unhold kubeadm && \ - apt-get update && apt-get install -y kubeadm=1.18.x-00 && \ + apt-get update && apt-get install -y kubeadm=1.19.x-00 && \ apt-mark hold kubeadm - # since apt-get version 1.1 you can also use the following method apt-get update && \ - apt-get install -y --allow-change-held-packages kubeadm=1.18.x-00 + apt-get install -y --allow-change-held-packages kubeadm=1.19.x-00 {{% /tab %}} {{% tab name="CentOS, RHEL or Fedora" %}} - # replace x in 1.18.x-0 with the latest patch version - yum install -y kubeadm-1.18.x-0 --disableexcludes=kubernetes + # replace x in 1.19.x-0 with the latest patch version + yum install -y kubeadm-1.19.x-0 --disableexcludes=kubernetes {{% /tab %}} {{< /tabs >}} @@ -375,18 +379,18 @@ without compromising the minimum required capacity for running your workloads. 
{{< tabs name="k8s_kubelet_and_kubectl" >}} {{% tab name="Ubuntu, Debian or HypriotOS" %}} - # replace x in 1.18.x-00 with the latest patch version + # replace x in 1.19.x-00 with the latest patch version apt-mark unhold kubelet kubectl && \ - apt-get update && apt-get install -y kubelet=1.18.x-00 kubectl=1.18.x-00 && \ + apt-get update && apt-get install -y kubelet=1.19.x-00 kubectl=1.19.x-00 && \ apt-mark hold kubelet kubectl - # since apt-get version 1.1 you can also use the following method apt-get update && \ - apt-get install -y --allow-change-held-packages kubelet=1.18.x-00 kubectl=1.18.x-00 + apt-get install -y --allow-change-held-packages kubelet=1.19.x-00 kubectl=1.19.x-00 {{% /tab %}} {{% tab name="CentOS, RHEL or Fedora" %}} - # replace x in 1.18.x-0 with the latest patch version - yum install -y kubelet-1.18.x-0 kubectl-1.18.x-0 --disableexcludes=kubernetes + # replace x in 1.19.x-0 with the latest patch version + yum install -y kubelet-1.19.x-0 kubectl-1.19.x-0 --disableexcludes=kubernetes {{% /tab %}} {{< /tabs >}}
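As the note earlier in the patch says, when `kubeadm upgrade plan` marks a component config as requiring manual upgrade, a file with replacement configs must be passed to `kubeadm upgrade apply` via `--config`. A minimal sketch of what such a file might look like — the file name and the individual field values are illustrative assumptions, not recommendations; only the `kind`/`apiVersion` pairs (matching the API groups in the plan table) are real component config types:

```yaml
# Hypothetical file, e.g. upgrade-config.yaml, passed as:
#   kubeadm upgrade apply v1.19.x --config upgrade-config.yaml
# Field values below are placeholders for illustration only.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: iptables
```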