I have initialised a fresh Terraform setup with the latest versions of everything and imported the existing deployment:

terraform import ec_deployment.dev 00000000000000000000000000000000
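For reference, the provider wiring for such a setup is roughly the following (a minimal sketch; the version constraint and the EC_API_KEY-based authentication are assumptions, not taken from the actual configuration):

terraform {
  required_providers {
    ec = {
      source  = "elastic/ec"
      version = "~> 0.7"   # assumed; pin to whatever version is actually in use
    }
  }
}

# The ec provider can read its API key from the EC_API_KEY environment
# variable, so the block itself can stay empty.
provider "ec" {}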
I made sure that my manifest is correct, so my local run of terraform plan says that everything is up to date and no actions are needed.
After creating a branch and pushing all of that to GitHub, I noticed that the plan there differs:
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # ec_deployment.dev will be updated in-place
  ~ resource "ec_deployment" "dev" {
      + apm_secret_token = (sensitive value)
      ~ elasticsearch    = {
          ~ cloud_id       = "dev:BASE64=" -> (known after apply)
          ~ config         = {
              + plugins = (known after apply)
            }
          ~ http_endpoint  = "http://00000000000000000000000000000000.europe-west3.gcp.cloud.es.io:9200" -> (known after apply)
          ~ https_endpoint = "https://00000000000000000000000000000000.europe-west3.gcp.cloud.es.io:443" -> (known after apply)
          ~ resource_id    = "00000000000000000000000000000000" -> (known after apply)
            # (12 unchanged attributes hidden)
        }
        id   = "00000000000000000000000000000000"
        name = "dev"
        # (7 unchanged attributes hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.
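To pin down exactly where the local and CI plans differ, one option (a sketch, assuming Terraform ≥ 0.12, which supports JSON plan output) is to capture both plans in machine-readable form and diff them:

# Run in both environments, then compare the resulting JSON files
terraform version
terraform plan -out=tfplan -no-color
terraform show -json tfplan > plan.json

Comparing the resource_changes section of the two plan.json files shows which attributes are driving the in-place update.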
What is even worse, after being applied it performs changes to Elastic Cloud, which means the deployment becomes unresponsive.
From the portal I see an activity entry for Kibana saying there are no significant changes, and for Elasticsearch I see:
Set http.cors_enabled to false
Set http.cors_allow_credentials to false
Set http.cors_max_age to 1728000
Set http.compression to true
Set monitoring_history_duration to 3d
Set monitoring_collection_interval to -1
Set destructive_requires_name to false
Set auto_create_index to true
Set scripting.inline.enabled to true
Set scripting.stored.enabled to true
This makes the import process really scary; I do not want to touch/import the production deployment until I understand what is going on.
There are two issues:
different plans locally and remotely
no changes to Elasticsearch should be made at all, since we are just importing everything as-is (see the sketch after this list)
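Just to illustrate the second point: if the existing cluster already carries user settings, they presumably have to be reflected in the manifest as well, otherwise the provider submits a plan without them. A hedged sketch of how that part of the manifest might look (assuming the provider exposes user_settings_yaml under the elasticsearch config object, as its documentation describes; the values are illustrative only, not the actual cluster settings):

resource "ec_deployment" "dev" {
  # ... name, region, version, deployment_template_id as imported ...

  elasticsearch = {
    config = {
      # Illustrative values only – mirror whatever the existing cluster actually has
      user_settings_yaml = <<-YAML
        http.cors.enabled: false
        http.compression: true
      YAML
    }
    hot = {
      autoscaling = {}
    }
  }
}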
I have a second staging cluster which has exactly the same issues, so if I can help with some additional details I will be glad to do that.
What is even worse: even though the changes were applied, a second run is going to apply them once again 🤦♂️. While this was happening I ran TF_LOG=TRACE terraform plan -no-color both locally and in GitHub, trying to check whether there was any significant difference, and noticed that Terraform itself had different versions. I made sure that my local Terraform is exactly the same version as the one in GitHub and finally see the same change locally, but it is still not clear why Terraform tries to perform these changes.
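To keep the local and CI runs on identical versions in the first place, one option (a sketch; the version number is a placeholder, not the one actually in use) is to pin the Terraform core version in the configuration:

terraform {
  # Placeholder – pin to the exact version both local and CI should run
  required_version = "~> 1.4.0"
}

Committing the .terraform.lock.hcl dependency lock file also keeps the provider build identical across machines.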
Since I can reproduce it locally and it is a dev deployment, I tried to apply these changes; in the Elastic Cloud portal I see the applied changes with a message.
And indeed the apply happened much faster than the previous time.
But still, each time I run terraform plan it says it will perform an in-place update 🤔
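One way to narrow down whether this is real drift on the Elastic Cloud side or a configuration/provider diff (a sketch; -refresh-only requires Terraform 0.15.4 or newer) is to compare a normal plan with a refresh-only one:

# Shows only differences between the state file and the real infrastructure
terraform plan -refresh-only -no-color

# Regular plan for comparison; if this still reports the in-place update
# while the refresh-only plan is clean, the diff comes from the configuration/provider side
terraform plan -no-color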
Just in case, here is the terraform manifest: main.tf