feat: Decide on Default Storage Class for EKS Clusters #711

Merged Oct 24, 2024
@@ -0,0 +1,75 @@
---
title: "Decide on Default Storage Class for EKS Clusters"
sidebar_label: "Default Storage Class"
description: Determine the default storage class for Kubernetes EKS clusters
---
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';

<Intro>
When provisioning EKS (Kubernetes) clusters, there is no one-size-fits-all default storage class. The right choice depends on your specific workload requirements, including performance, scalability, and cost-efficiency. Storage classes are not mutually exclusive, and in many cases, the best solution might involve using a combination of options to meet different needs.
</Intro>

## Default Storage Class Options

We need to decide between **Amazon EFS (Elastic File System)** and **Amazon EBS (Elastic Block Store)** as the default storage class for our EKS clusters.

<KeyPoints title="Top Considerations">
- Availability Zone Lock-in: EBS volumes are restricted to a single Availability Zone, which can impact high availability and disaster recovery strategies. This is a key drawback of using EBS. If you need a Pod to recover across multiple AZs, EFS is a more suitable option, though it comes at a higher cost.
- Performance: EFS generally offers lower performance than EBS. Throughput can be increased by paying for additional provisioned capacity, but EFS throttling has routinely caused outages even for low-demand applications. Additionally, poor file-locking performance makes EFS completely unsuitable for high-performance applications like an RDBMS.
- Cost: EFS is significantly more expensive than EBS, at least 3x the price per GB and potentially more depending on performance demands, although there may be some savings from not having to reserve size for future growth.
- Concurrent Access: An EBS volume can be attached to only one instance at a time, and only within its own Availability Zone, making it unsuitable for scenarios that require concurrent access from multiple instances. In contrast, EFS allows multiple instances or Pods to access the same file system concurrently, which is useful for distributed applications or workloads requiring shared storage across multiple nodes.
</KeyPoints>

## Amazon EFS

**Amazon EFS** provides a scalable, fully managed, elastic file system with NFS compatibility, designed for use with AWS Cloud services and on-premises resources.

### Pros:
- **Unlimited Disk Space:** Automatically scales storage capacity as needed without manual intervention.
- **Shared Access:** Allows multiple pods to access the same file system concurrently, facilitating shared storage scenarios.
- **Managed Service:** Fully managed by AWS, reducing operational overhead for maintenance and scaling.
- **Availability Zone Failover:** For workloads that require failover across multiple Availability Zones, EFS is a more suitable option. It provides multi-AZ durability, ensuring that Pods can recover and access persistent storage seamlessly across different AZs.

### Cons:
- **Lower Performance:** Generally offers lower performance than EBS, with throughput as low as 100 MB/s, which may fall short of the needs of even moderately demanding applications.
- **Higher Cost:** Significantly more expensive than EBS, at least 3x the price per GB and potentially more depending on performance demands, although there may be some savings from not having to reserve size for future growth.
- **Higher Latency:** Higher latency compared to EBS, which may impact performance-sensitive applications.
- **No Native Backup Support:** EFS lacks a built-in, straightforward backup and recovery solution for EKS. Kubernetes-native tools don’t support EFS backups directly, requiring the use of alternatives like AWS Backup. Recovery, however, can be non-trivial and may involve complex configurations to restore data effectively.
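
For reference, a minimal sketch of an EFS-backed `StorageClass` using the AWS EFS CSI driver's dynamic provisioning mode might look like the following. The class name and file system ID are placeholders, and the parameters should be adjusted to match your cluster's EFS CSI driver configuration.

```yaml
# Sketch of an EFS StorageClass using the EFS CSI driver (efs.csi.aws.com).
# The fileSystemId is a placeholder; substitute the ID of your EFS file system.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap        # dynamically provision an EFS Access Point per volume
  fileSystemId: fs-0123456789abcdef0
  directoryPerms: "700"           # permissions for each access point's root directory
```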

## Amazon EBS

**Amazon EBS** provides high-performance block storage volumes for use with Amazon EC2 instances, suitable for a wide range of workloads.

### Pros:
- **Higher Performance:** Offers high IOPS and low latency, making it ideal for performance-critical applications.
- **Cost-Effective:** Substantially cheaper per GB than EFS, with the option to choose volume types (such as gp3) that match cost to the workload.
- **Native EKS Integration:** Well-integrated with Kubernetes through the EBS CSI (Container Storage Interface) driver, facilitating seamless provisioning and management.
- **Supports Snapshot and Backup:** Supports snapshotting for data backup, recovery, and cloning.

### Cons:
- **Single-Attach Limitation:** EBS volumes may only be attached to a single node at a time, limiting shared access across multiple pods.
- **Data Sharing:** Data cannot be easily shared across multiple instances, requiring additional configurations or solutions for shared access.
- **Availability Zones:** EBS volumes are confined to a single Availability Zone, limiting high availability and disaster recovery across zones. This limitation can be mitigated by configuring a `StatefulSet` with replicas spread across multiple AZs. However, for workloads using EBS-backed Persistent Volume Claims (PVCs), failover to a different AZ requires manual intervention, including provisioning a new volume in the target zone, as EBS volumes cannot be moved between zones.
- **Non-Elastic Storage:** EBS volumes can be manually resized, but this process is not fully automated in EKS. After resizing an EBS volume, additional manual steps are required to expand the associated Persistent Volume (PV) and Persistent Volume Claim (PVC). This introduces operational complexity, especially for workloads with dynamic storage needs, as EBS lacks automatic scaling like EFS.
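
As a point of comparison, a minimal sketch of a gp3-backed `StorageClass` marked as the cluster default might look like this. The class name and parameter values are illustrative; `WaitForFirstConsumer` binding helps ensure the volume is created in the same Availability Zone as the Pod that uses it, and `allowVolumeExpansion` permits growing a volume by editing its PVC, subject to the resizing caveats above.

```yaml
# Sketch of an EBS-backed gp3 StorageClass using the EBS CSI driver (ebs.csi.aws.com),
# annotated as the cluster's default class. Names and parameter values are illustrative.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  encrypted: "true"
volumeBindingMode: WaitForFirstConsumer   # create the volume in the AZ of the scheduled Pod
allowVolumeExpansion: true                # allow expansion by editing the PVC
```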

## Recommendation

Use **Amazon EBS** as the primary storage option when:

- High performance, low-latency storage is required for workloads confined to a single Availability Zone.
- The workload doesn’t require shared access across multiple Pods.
- You need cost-effective storage with support for snapshots and backups.
- Manual resizing of volumes is acceptable for capacity management, recognizing that failover across AZs requires manual intervention and provisioning.

Consider **Amazon EFS** when:

- Multiple Pods need concurrent read/write access to shared data across nodes.
- Workloads must persist data across multiple Availability Zones for high availability, and the application does not support native replication.
- Elastic, automatically scaling storage is necessary to avoid manual provisioning, especially for workloads with unpredictable growth.
- You are willing to trade off higher costs and lower performance for multi-AZ durability and easier management of shared storage.
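
If EBS is set as the cluster default, workloads that genuinely need shared storage can still opt into EFS explicitly by naming the class in their PersistentVolumeClaim. A minimal sketch, assuming an EFS class named `efs-sc` as in the earlier example:

```yaml
# Sketch of a PVC that opts into the (assumed) efs-sc class for shared, multi-AZ access.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany            # multiple Pods across nodes can mount the volume read-write
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi             # EFS is elastic and ignores this size, but the field is required
```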

:::important
EFS should never be used as backend storage for performance-sensitive applications like databases, due to its high latency and poor performance under heavy load.
:::