Every plan run resets upgrade_settings.max_surge for default_node_pool #24020
Comments
AKS changed the default max surge in the October release, so that for clusters on Kubernetes > 1.28 max surge now defaults to 10% (previously it was left blank, which implied 1 under the covers). See Release 2023-10-01 · Azure/AKS (github.com). Is there a way we could have done this better for Terraform?
Just to add, we've worked around this by explicitly setting `upgrade_settings`.
Azure Kubernetes Service changed the default max surge in the October 2023 release, so that for clusters based on Kubernetes > 1.28 max surge defaults to 10%; see https://github.com/Azure/AKS/releases/tag/2023-10-01 Previously it was left blank, which implied a value of 1 under the bonnet. Using the current version of Terraform AzureRM, 3.86.0, leads to implicit resetting of the max_surge: max_surge = "10%" -> null. The only workaround to avoid this confusing annoyance is to set max_surge to an explicit value, e.g. the default "10%". But this requires max_surge to be exposed to end-users of this module. See also hashicorp/terraform-provider-azurerm#24020 Closes claranet#6 Signed-off-by: Mateusz Łoskot <mateusz@loskot.net>
EDIT: Ignore me - rookie error, I was working off the wrong, duplicated file. This also resolved my issue once I modified the correct file.
hi @aa2811, do you mind sharing how you did this?
I have been temporarily avoiding this by adding a line to the lifecycle block in the resource "azurerm_kubernetes_cluster" (assuming that, like me, you do not change the default node pool), e.g. as sketched below:
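(The commenter's original snippet was not captured in this text; the following is a minimal sketch of such an `ignore_changes` workaround. All names, location and VM size are placeholders, not values from the issue.)

```hcl
resource "azurerm_kubernetes_cluster" "example" {
  # Placeholder values; adjust to your environment.
  name                = "example-aks"
  location            = "westeurope"
  resource_group_name = "example-rg"
  dns_prefix          = "exampleaks"

  default_node_pool {
    name       = "default"
    node_count = 1
    vm_size    = "Standard_D2s_v3"
    # upgrade_settings deliberately not set; Azure fills in max_surge = "10%".
  }

  identity {
    type = "SystemAssigned"
  }

  lifecycle {
    # Ignore the server-side default so plans stop proposing
    # upgrade_settings { max_surge = "10%" -> null }.
    ignore_changes = [
      default_node_pool[0].upgrade_settings,
    ]
  }
}
```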
And for the resource "azurerm_kubernetes_cluster_node_pool" I do the same, e.g.:
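(Again, the original example is not preserved here; a comparable hypothetical sketch for a separate user node pool would be:)

```hcl
resource "azurerm_kubernetes_cluster_node_pool" "user" {
  # Placeholder values; adjust to your environment.
  name                  = "user"
  kubernetes_cluster_id = azurerm_kubernetes_cluster.example.id
  vm_size               = "Standard_D2s_v3"
  node_count            = 1

  lifecycle {
    # Same idea: ignore the upgrade_settings block that Azure defaults for us.
    ignore_changes = [
      upgrade_settings,
    ]
  }
}
```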
This indeed is a workaround to the issue. Our team decided to always specify the value explicitly, but that is not possible for Spot node pools. Any suggestion?
Yes, but if you want to change it, there is no longer a need for the
This issue does not apply to Spot instances since, as you already mentioned, it cannot be specified for them. So you are probably referring to tf code that tries to create both Spot and on-demand node pools within the same logic, for which the solution would be to split it into separate code for Spot and on-demand.
Yeah, you're right. Sorry, I was in a rush and didn't think properly about the question asked. My bad.
Well, kind of. I mean, we have a dedicated module for node pools. I was thinking of adding a dynamic `lifecycle` block. [EDIT] Just found an open discussion and a closed PR (hashicorp/terraform#32608) about my idea of having dynamic `lifecycle` blocks.
…ure value for all nodepools to avoid permanent update (#727) As per hashicorp/terraform-provider-azurerm#24020, we are having the same repetitive update on Azure, so setting the default should avoid the permanent update override.
…r-ending updated attribute (#732) Follow-up of #727, #730 and #731. This fixes the never-ending changed attribute `upgrade_settings {}` for the 2 Spot node pools on `privatek8s`. It follows the tip found in hashicorp/terraform-provider-azurerm#24020 (comment) to avoid having all of our plans trying to change the 2 node pools. Signed-off-by: Damien Duportal <damien.duportal@gmail.com>
So I faced the same problem. If anyone reads this, here is a workaround.
You can always specify `upgrade_settings` explicitly, matching the value Azure now defaults to; a sketch follows.
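(The original snippet was not captured in this text; this is a minimal sketch of pinning the value explicitly. The "10%" matches the AKS default described above; all other names and sizes are placeholders.)

```hcl
resource "azurerm_kubernetes_cluster" "example" {
  # Placeholder values; adjust to your environment.
  name                = "example-aks"
  location            = "westeurope"
  resource_group_name = "example-rg"
  dns_prefix          = "exampleaks"

  default_node_pool {
    name       = "default"
    node_count = 1
    vm_size    = "Standard_D2s_v3"

    upgrade_settings {
      # Pin the value Azure now defaults to, so plans no longer show a diff.
      max_surge = "10%"
    }
  }

  identity {
    type = "SystemAssigned"
  }
}
```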
Is there an existing issue for this?
Community Note
Terraform Version
1.6.4
AzureRM Provider Version
3.82.0
Affected Resource(s)/Data Source(s)
azurerm_kubernetes_cluster
Terraform Configuration Files
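(The issue's original configuration was not captured in this text; the following is a hypothetical minimal configuration matching the behaviour described below, i.e. no custom `upgrade_settings` in `default_node_pool`. All names, location and VM size are placeholders.)

```hcl
provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "example" {
  # Placeholder values; adjust to your environment.
  name     = "example-rg"
  location = "westeurope"
}

resource "azurerm_kubernetes_cluster" "example" {
  name                = "example-aks"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  dns_prefix          = "exampleaks"

  default_node_pool {
    name       = "default"
    node_count = 1
    vm_size    = "Standard_D2s_v3"
    # No upgrade_settings block: Azure computes the default "Maximum surge".
  }

  identity {
    type = "SystemAssigned"
  }
}
```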
Debug Output/Panic Output
Expected Behaviour

Subsequent runs of `terraform plan` not reporting any changes to the `upgrade_settings.max_surge` property.

Actual Behaviour
I use `azurerm_kubernetes_cluster` to create a new cluster without specifying custom `upgrade_settings` in `default_node_pool`, and Azure calculates the default "Maximum surge" for me.

Then, every time I run `terraform plan`, still without customised `upgrade_settings` in my `.tf`, the provider always tries to modify my cluster with the in-place change shown below.
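Laid out as the plan prints it (reconstructed from the inline text above, so the exact indentation is approximate), the recurring diff is:

```
      - upgrade_settings {
          - max_surge = "10%" -> null
        }
```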
This is not expected behaviour, is it?
Steps to Reproduce
1. `terraform plan`
2. `terraform apply`
3. `terraform plan`
4. `terraform plan`
Important Factoids
No response
References
No response