From 0d6f76ed75feedea9fe7016496c162a97260d1ac Mon Sep 17 00:00:00 2001
From: Patrick Ohly
Date: Fri, 8 Apr 2022 17:05:22 +0200
Subject: [PATCH] Apply suggestions from code review

Co-authored-by: Tim Bannister
Co-authored-by: Qiming Teng
---
 .../blog/_posts/2022-TBD-storage-capacity-GA/index.md | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/content/en/blog/_posts/2022-TBD-storage-capacity-GA/index.md b/content/en/blog/_posts/2022-TBD-storage-capacity-GA/index.md
index a7d8388900e00..5fbe53b11ab6a 100644
--- a/content/en/blog/_posts/2022-TBD-storage-capacity-GA/index.md
+++ b/content/en/blog/_posts/2022-TBD-storage-capacity-GA/index.md
@@ -7,9 +7,7 @@ slug: storage-capacity-ga
 
 **Authors:** Patrick Ohly (Intel)
 
-With ["storage capacity tracking"](/docs/concepts/storage/storage-capacity/)