Readiness Checklist
- I checked to make sure that this issue has not already been filed (see possible related issue at the end)
- I am reporting the issue to the correct repository (for multi-repository projects)
Expected Behavior
When working with multiple data tiers: after creating the hot tier with Terraform and then enabling a cold or frozen tier manually in the User Console, it should be possible to remove/disable the cold/frozen tier via terraform apply.
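For reference, a minimal sketch of the kind of configuration involved (the actual deployment.tf was not captured here; the name, region, version, and template ID below are placeholders) — only the hot tier is declared, so a manually added cold/frozen tier should be removed on the next apply:

```hcl
# Hypothetical minimal deployment.tf sketch (values are placeholders).
# Only the hot tier is declared; tiers added outside Terraform should
# be reconciled away by the next terraform apply.
resource "ec_deployment" "logging" {
  name                   = "logging"
  region                 = "us-east-1"
  version                = "7.13.2"
  deployment_template_id = "aws-io-optimized-v2"

  elasticsearch {
    topology {
      id   = "hot_content"
      size = "2g"
    }
  }
}
```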
Current Behavior
After enabling a cold or frozen tier manually, running terraform apply without those tiers in the configuration fails with:
Error: failed updating deployment: 2 errors occurred:
│ * api error: clusters.cluster_invalid_plan: Cluster must contain at least a master topology element and a data topology element. 'master' node type is missing,'master' node type exists in more than one topology element (resources.elasticsearch[0].cluster_topology)
│ * api error: deployments.elasticsearch.node_roles_error: Invalid node_roles configuration: The node_roles in the plan contains values not present in the template. [id = hot_content] (resources.elasticsearch[0])
Full output
> terraform apply
ec_deployment.logging: Refreshing state... [id=f0d0301b0b15fbec403a5cc2b6211fc5]
Note: Objects have changed outside of Terraform
Terraform detected the following changes made outside of Terraform since the last "terraform apply":
# ec_deployment.logging has been changed
~ resource "ec_deployment" "logging" {
id = "f0d0301b0b15fbec403a5cc2b6211fc5"
name = "logging"
tags = {}
# (6 unchanged attributes hidden)
~ elasticsearch {
# (7 unchanged attributes hidden)
~ topology {
~ id = "hot_content" -> "frozen"
~ instance_configuration_id = "aws.data.highio.i3" -> "aws.es.datafrozen.i3en"
~ node_roles = [
- "data_content",
+ "data_frozen",
- "data_hot",
- "ingest",
- "master",
- "remote_cluster_client",
- "transform",
]
~ size = "2g" -> "4g"
# (2 unchanged attributes hidden)
~ autoscaling {
~ max_size = "116g" -> "120g"
# (1 unchanged attribute hidden)
}
}
+ topology {
+ id = "hot_content"
+ instance_configuration_id = "aws.data.highio.i3"
+ node_roles = [
+ "data_content",
+ "data_hot",
+ "ingest",
+ "master",
+ "remote_cluster_client",
+ "transform",
]
+ size = "2g"
+ size_resource = "memory"
+ zone_count = 1
+ autoscaling {
+ max_size = "116g"
+ max_size_resource = "memory"
}
}
# (1 unchanged block hidden)
}
}
Unless you have made equivalent changes to your configuration, or ignored the relevant attributes using ignore_changes, the following plan may include actions to undo or
respond to these changes.
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:
# ec_deployment.logging will be updated in-place
~ resource "ec_deployment" "logging" {
id = "f0d0301b0b15fbec403a5cc2b6211fc5"
name = "logging"
tags = {}
# (6 unchanged attributes hidden)
~ elasticsearch {
# (7 unchanged attributes hidden)
~ topology {
~ id = "frozen" -> "hot_content"
~ size = "4g" -> "2g"
# (4 unchanged attributes hidden)
# (1 unchanged block hidden)
}
- topology {
- id = "hot_content" -> null
- instance_configuration_id = "aws.data.highio.i3" -> null
- node_roles = [
- "data_content",
- "data_hot",
- "ingest",
- "master",
- "remote_cluster_client",
- "transform",
] -> null
- size = "2g" -> null
- size_resource = "memory" -> null
- zone_count = 1 -> null
- autoscaling {
- max_size = "116g" -> null
- max_size_resource = "memory" -> null
}
}
# (1 unchanged block hidden)
}
}
Plan: 0 to add, 1 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
ec_deployment.logging: Modifying... [id=f0d0301b0b15fbec403a5cc2b6211fc5]
╷
│ Error: failed updating deployment: 2 errors occurred:
│ * api error: clusters.cluster_invalid_plan: Cluster must contain at least a master topology element and a data topology element. 'master' node type is missing,'master' node type exists in more than one topology element (resources.elasticsearch[0].cluster_topology)
│ * api error: deployments.elasticsearch.node_roles_error: Invalid node_roles configuration: The node_roles in the plan contains values not present in the template. [id = hot_content] (resources.elasticsearch[0])
│
│
│
│ with ec_deployment.logging,
│ on deployment.tf line 17, in resource "ec_deployment" "logging":
│ 17: resource "ec_deployment" "logging" {
│
╵
Steps to Reproduce
1. Create the deployment (hot tier only) with terraform apply
2. Go to the User Console, Edit Deployment -> Add cold or frozen tier
3. Run terraform apply again and get the above error
Note that with the warm tier instead of cold or frozen, this scenario does not fail: manually enabled warm nodes are removed after terraform apply.
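As a possible interim workaround (unverified against provider v0.2.1; this relies on the assumption that the ec provider treats a zero-sized topology element as a disabled tier), the manually enabled tier could be declared explicitly and sized to zero, so Terraform scales it down instead of producing an invalid plan:

```hcl
# Hypothetical workaround sketch (assumption: a "0g" topology size
# disables the tier rather than producing an invalid plan).
elasticsearch {
  topology {
    id   = "hot_content"
    size = "2g"
  }

  topology {
    id   = "frozen"
    size = "0g"
  }
}
```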
Context
We use the Terraform Elastic Cloud provider in internal testing routines to create deployments, and we enable the frozen tier via our own tooling (which will later be migrated to Terraform). The subsequent terraform apply call fails, which should not be the case (judging by the success with the warm tier).
Possible Solution
N/A
Your Environment
Version used:
> terraform version
Terraform v1.0.3
on darwin_amd64
+ provider registry.terraform.io/elastic/ec v0.2.1
Running against: Elastic Cloud SaaS (https://cloud.elastic.co)
Thanks for opening the issue @chingis-elastic. #336 is the related issue, and there are a few things compounding this problem. The diffing doesn't seem to be quite right; I'm currently looking into it.
Related
Might be related: #286