Apply suggestions from code review
Co-authored-by: Tim Bannister <tim@scalefactory.com>
Co-authored-by: Qiming Teng <tengqm@outlook.com>
3 people authored Apr 8, 2022
1 parent 94bf646 commit 0d6f76e
Showing 1 changed file with 4 additions and 6 deletions.
content/en/blog/_posts/2022-TBD-storage-capacity-GA/index.md: 10 changes (4 additions & 6 deletions)
@@ -7,9 +7,7 @@ slug: storage-capacity-ga
 
 **Authors:** Patrick Ohly (Intel)
 
-With ["storage capacity tracking"](/docs/concepts/storage/storage-capacity/)
-promoted to GA in Kubernetes 1.24, support for local storage in Kubernetes has
-reached its next milestone.
+The v1.24 release of Kubernetes brings _storage capacity tracking_ as a generally available feature.
 
 ## Problems we are solving
 
@@ -21,7 +19,7 @@ for a Pod when that Pod has volumes that still need to be provisioned.
 
 Without this information, a Pod may get stuck without ever being scheduled onto
 a suitable nodes because kube-scheduler has to choose blindly and always ends
-up picking a node for which the volume then cannot be provisioned because the
+up picking a node for which the volume cannot be provisioned because the
 CSI driver does not have sufficient storage left.
 
 Because CSI drivers publish storage capacity information that gets used at a
@@ -38,7 +36,7 @@ stuck without it.
 
 ## Problems we are *not* solving
 
-Recovery from a failed volume provisioning attempt has one known gap: if a Pod
+Recovery from a failed volume provisioning attempt has one known limitation: if a Pod
 uses two volumes and only one of them could be provisioned, then all future
 scheduling decisions are limited by the already provisioned volume. If that
 volume is local to a node and the other volume cannot be provisioned there, the
@@ -57,7 +55,7 @@ drivers with storage capacity tracking, a prototype was developed and discussed
 in [a PR](https://github.com/kubernetes/autoscaler/pull/3887). It was meant to
 work with arbitrary CSI drivers, but that flexibility made it hard to configure
 and slowed down scale up operations: because autoscaler was unable to simulate
-volume provisioning, it only increased the cluster one node at a time, which
+volume provisioning, it only scaled the cluster by one node at a time, which
 was seen as insufficient.
 
 Therefore that PR was not merged and a different approach with tighter coupling
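As an aside on the hunk above that mentions CSI drivers publishing storage capacity information: that data is exposed through CSIStorageCapacity objects, which graduated to storage.k8s.io/v1 in Kubernetes 1.24 and which kube-scheduler consults when placing a Pod whose volumes still need to be provisioned. The following is only a minimal sketch of such an object; the object name, namespace, storage class name, topology label, and sizes are hypothetical.

```yaml
# Sketch of a CSIStorageCapacity object as a CSI driver deployment might publish it.
# All names, labels, and quantities below are hypothetical examples.
apiVersion: storage.k8s.io/v1
kind: CSIStorageCapacity
metadata:
  name: fast-local-worker-1        # hypothetical object name
  namespace: kube-system           # example: namespace where the CSI driver runs
storageClassName: fast-local       # hypothetical StorageClass backed by the CSI driver
nodeTopology:                      # which node(s) this capacity figure applies to
  matchLabels:
    kubernetes.io/hostname: worker-1
capacity: 100Gi                    # remaining capacity for volumes of this class
maximumVolumeSize: 50Gi            # optional: largest single volume that still fits
```

Such objects are normally produced automatically (typically by the external-provisioner sidecar) for drivers that enable capacity tracking via `storageCapacity: true` in their CSIDriver object; cluster operators do not usually create them by hand.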
