feat: Change EKS default version to 1.17 #949
Conversation
LGTM
Pushing out a major version every time AWS releases a new EKS version seems a little silly to me. Can't we just make `cluster_version` a required variable instead? @barryib @max-rocket-internet what do you think?
Just rebased against the latest `master`.
One note on this: when upgrading my managed node groups to 1.17, I ran into a problem.
The AMI version should be retrieved automatically. @gdurandvadas, what AWS Terraform provider version are you using?
@gdurandvadas I think it's related to hashicorp/terraform-provider-aws#12675, in which case I'd need to bump the provider version in this PR.
I just pushed another commit. Can you try again, please?
A counterargument on this: look at the comment from @gdurandvadas above. He discovered that when upgrading his cluster to 1.17, he ran into a problem. Now, the problem wasn't related to this module; instead, it was a problem with the version of the AWS provider he was using, and it was fixed in a newer provider version.
@sc250024 thanks for the update. Agreed that this is not an issue with the module itself.
Bear in mind that I'm using
We can, and do, put out major versions when other breaking changes are made. I don't think that AWS releasing a new supported Kubernetes version in EKS should trigger a major version of the module. This is unnecessary work for users.

Looking through the changelog, I think the following releases were major version bumps just because of the Kubernetes version:

OK, that's less than I would have thought. My memory must be biased by the previous two Kubernetes releases; 1.11, 1.13 and 1.14 were mixed in with other big changes. But having a major release pending does seem to delay the potential for a release, mainly because we only run a single branch.

I still think having the version set as a default is a little silly 😉
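For clarity, the alternative being debated, dropping the default so users must pin a version themselves, would amount to roughly the following in the module's `variables.tf` (a sketch; the description text is assumed):

```hcl
variable "cluster_version" {
  description = "Kubernetes version to use for the EKS cluster"
  type        = string
  # default   = "1.17"  # removing this line makes the variable required
}
```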
@dpiddockcmp In the end, it's your module, guys. I don't mind the version bump personally, but if it's decided to get rid of it, then 🤷♂️

@filipedeo @barryib Is there anything else needed for this to get merged?
@dpiddockcmp I don't have a strong opinion on that. Maybe @max-rocket-internet or @antonbabenko has one? But I think we shouldn't be afraid of bumping a major release as long as our documentation and module are consistent for a particular EKS version.
@sc250024 To me it's a community module. It belongs to the community; we are only the current maintainers. That's why @dpiddockcmp is starting a discussion, otherwise he would just drop the default value without asking. Again, thank you for your contributions, and more broadly to the whole community ❤️❤️❤️
Signed-off-by: Scott Crooks <scott.crooks@gmail.com>
This commit bumps the version of the AWS Terraform provider to solve the problem originally reported here: hashicorp/terraform-provider-aws#12675.
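For context, a provider bump of this kind would typically land in the module's `versions.tf`. A minimal sketch, assuming the string-style constraint syntax used with Terraform 0.12; the exact minimum versions shown are assumptions, not the module's actual constraints:

```hcl
terraform {
  required_version = ">= 0.12.9"

  required_providers {
    # Assumed minimum: new enough to include the managed node group upgrade fix.
    aws = ">= 2.70.0"
  }
}
```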
@barryib I made the doc changes you requested and pushed. Would you mind re-reviewing, please?
Thanks @sc250024 for your contribution. LGTM. Let's wait a bit for @max-rocket-internet's or @antonbabenko's thoughts about removing the `cluster_version` default value.
@barryib Alright.
LGTM, too. Thanks, @sc250024
LGTM, too. What is blocking the merge here?
In the meantime, I updated with eksctl:

```shell
kubectl scale deployments/cluster-autoscaler --replicas=0 -n kube-system
eksctl upgrade nodegroup --name=node-group-name --cluster=cluster-name --kubernetes-version=1.17
```

Good luck!
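A likely follow-up once the node group upgrade finishes is scaling the autoscaler back up; the replica count below is an assumption about the original deployment, not something stated above:

```shell
kubectl scale deployments/cluster-autoscaler --replicas=1 -n kube-system
```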
@max-rocket-internet Anything to add?
gentle ping :)
Oooh, I could go either way on this one; I don't think it's a big deal to remove the default. @dpiddockcmp makes good points as always, and I think it sounds like a good idea: it would reduce the number of releases and could also perhaps stop a few issues from people asking for the module to "support" the new version (#775, #856) 👍
@barryib could you please lift your request for changes so we can get this in? (Or is a change needed?)
So this PR has been open for almost a month. I'm fine with making changes, but someone has to pull the trigger. What's the verdict?
:(
@sc250024 does it mean 1.17 is supported with the latest release here? Or is eksctl still needed, as mentioned by @abdennour above?
You'd still need eksctl, @arashkaffamanesh; the changes were not merged in.
@arashkaffamanesh, see the answer from @filipedeo above.
My 2 cents: I was able to upgrade different EKS clusters to version 1.17 using the latest release of this module.
One thing I should mention, though, is that when I upgrade my cluster using Terraform, only the control plane is upgraded; I need to upgrade the AMIs of the worker nodes manually, using the AWS console. I've always assumed this was a limitation of managed node groups, but looking at @gdurandvadas's comments I'm wondering whether I'm doing something wrong in my configuration and whether I could manage worker node AMIs using Terraform. Just as a reference, here is an example of the configuration I usually use in my EKS clusters.

```hcl
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "~> 12.2"
cluster_version = "1.17"
cluster_name = var.eks_cluster_name
subnets = var.private_subnets
enable_irsa = true
tags = {
Company = var.company
Project = var.project
Environment = var.environment
Terraform = true
}
vpc_id = var.vpc_id
node_groups_defaults = {
ami_type = "AL2_x86_64"
disk_size = 50
}
node_groups = {
base-group = {
desired_capacity = 4
max_capacity = 5
min_capacity = 3
autoscaling_enabled = true
protect_from_scale_in = true
instance_type = "t3a.medium"
k8s_labels = {
company = var.company
project = var.project
environment = var.environment
terraform = true
}
additional_tags = {
Company = var.company
Project = var.project
Environment = var.environment
Terraform = true
# Required by IRSA autoscaler
"k8s.io/cluster-autoscaler/enabled" = true
"k8s.io/cluster-autoscaler/${var.eks_cluster_name}" = true
}
}
}
map_roles = var.map_roles
}
```
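On the question of upgrading worker AMIs from Terraform: the AWS provider's `aws_eks_node_group` resource does expose a `version` argument, so a managed node group upgrade can in principle be driven from Terraform rather than the console. A minimal sketch outside the module, with hypothetical role and subnet references:

```hcl
resource "aws_eks_node_group" "example" {
  cluster_name    = "my-cluster"            # hypothetical cluster name
  node_group_name = "base-group"
  node_role_arn   = aws_iam_role.node.arn   # hypothetical IAM role for the nodes
  subnet_ids      = var.private_subnets

  # Bumping this triggers a rolling upgrade of the node group's AMIs.
  version = "1.17"

  scaling_config {
    desired_size = 4
    max_size     = 5
    min_size     = 3
  }
}
```

Whether the module itself exposed this at the time is a separate question; the sketch only shows the underlying resource.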
Not sure why the PR was closed and the branch deleted? But AFAIK, there's nothing stopping anyone using this module from creating or upgrading to 1.17. The MNG issue mentioned here is not related to this module. Or am I misunderstanding?
I can confirm 1.17.9 is working fine with AWS provider 2.70.0, including the cluster-autoscaler and spot instances. The only issue I had was getting the autoscaler deployed with Helm to work; somehow it didn't want to work, and I had to use an older implementation from 1.15, which worked out of the box. @cippaciong thanks so much for the hint about the provider `aws = "2.70.0"`!
I am guessing because of non-responsiveness from the reviewers?
Pretty much. And it appears #972 is the preferred solution anyway.
@sc250024 I was on vacation for almost a month, sorry for the delay. For the record, we don't need this PR to upgrade to EKS 1.17. Just use the latest version of the module and set your `cluster_version` to `"1.17"`.
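In other words, something along these lines, a condensed sketch of the fuller example above with placeholder names:

```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 12.2"

  cluster_name    = "my-cluster"
  cluster_version = "1.17"
  subnets         = var.private_subnets
  vpc_id          = var.vpc_id
}
```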
I'm going to lock this pull request because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems related to this change, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
PR o'clock
Description
Upgrades the module to support EKS 1.17. Resolves issue #947.
Checklist