
Unable to disable manually added cold or frozen tier: 'master' node type is missing #343

Closed

chingis-elastic opened this issue Jul 27, 2021 · 3 comments

Labels: bug (Something isn't working), theme:topology
chingis-elastic commented Jul 27, 2021

Readiness Checklist

  • I am running the latest version
  • I checked the documentation and found no answer
  • I checked to make sure that this issue has not already been filed (see the possibly related issue at the end)
  • I am reporting the issue to the correct repository (for multi-repository projects)

Expected Behavior

When working with multiple data tiers, after creating a hot tier and manually enabling a cold or frozen tier in the User Console, it should be possible to remove/disable the cold/frozen tiers via terraform apply.

Current Behavior

After manually enabling a cold or frozen tier, running terraform apply without those tiers in the configuration fails with:

Error: failed updating deployment: 2 errors occurred:
│ 	* api error: clusters.cluster_invalid_plan: Cluster must contain at least a master topology element and a data topology element. 'master' node type is missing,'master' node type exists in more than one topology element (resources.elasticsearch[0].cluster_topology)
│ 	* api error: deployments.elasticsearch.node_roles_error: Invalid node_roles configuration: The node_roles in the plan contains values not present in the template. [id = hot_content] (resources.elasticsearch[0])
Full output
> terraform apply
ec_deployment.logging: Refreshing state... [id=f0d0301b0b15fbec403a5cc2b6211fc5]

Note: Objects have changed outside of Terraform

Terraform detected the following changes made outside of Terraform since the last "terraform apply":

  # ec_deployment.logging has been changed
  ~ resource "ec_deployment" "logging" {
        id                     = "f0d0301b0b15fbec403a5cc2b6211fc5"
        name                   = "logging"
        tags                   = {}
        # (6 unchanged attributes hidden)

      ~ elasticsearch {
            # (7 unchanged attributes hidden)

          ~ topology {
              ~ id                        = "hot_content" -> "frozen"
              ~ instance_configuration_id = "aws.data.highio.i3" -> "aws.es.datafrozen.i3en"
              ~ node_roles                = [
                  - "data_content",
                  + "data_frozen",
                  - "data_hot",
                  - "ingest",
                  - "master",
                  - "remote_cluster_client",
                  - "transform",
                ]
              ~ size                      = "2g" -> "4g"
                # (2 unchanged attributes hidden)

              ~ autoscaling {
                  ~ max_size          = "116g" -> "120g"
                    # (1 unchanged attribute hidden)
                }
            }
          + topology {
              + id                        = "hot_content"
              + instance_configuration_id = "aws.data.highio.i3"
              + node_roles                = [
                  + "data_content",
                  + "data_hot",
                  + "ingest",
                  + "master",
                  + "remote_cluster_client",
                  + "transform",
                ]
              + size                      = "2g"
              + size_resource             = "memory"
              + zone_count                = 1

              + autoscaling {
                  + max_size          = "116g"
                  + max_size_resource = "memory"
                }
            }

            # (1 unchanged block hidden)
        }
    }

Unless you have made equivalent changes to your configuration, or ignored the relevant attributes using ignore_changes, the following plan may include actions to undo or
respond to these changes.

───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # ec_deployment.logging will be updated in-place
  ~ resource "ec_deployment" "logging" {
        id                     = "f0d0301b0b15fbec403a5cc2b6211fc5"
        name                   = "logging"
        tags                   = {}
        # (6 unchanged attributes hidden)

      ~ elasticsearch {
            # (7 unchanged attributes hidden)

          ~ topology {
              ~ id                        = "frozen" -> "hot_content"
              ~ size                      = "4g" -> "2g"
                # (4 unchanged attributes hidden)

                # (1 unchanged block hidden)
            }
          - topology {
              - id                        = "hot_content" -> null
              - instance_configuration_id = "aws.data.highio.i3" -> null
              - node_roles                = [
                  - "data_content",
                  - "data_hot",
                  - "ingest",
                  - "master",
                  - "remote_cluster_client",
                  - "transform",
                ] -> null
              - size                      = "2g" -> null
              - size_resource             = "memory" -> null
              - zone_count                = 1 -> null

              - autoscaling {
                  - max_size          = "116g" -> null
                  - max_size_resource = "memory" -> null
                }
            }

            # (1 unchanged block hidden)
        }
    }

Plan: 0 to add, 1 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

ec_deployment.logging: Modifying... [id=f0d0301b0b15fbec403a5cc2b6211fc5]
╷
│ Error: failed updating deployment: 2 errors occurred:
│ 	* api error: clusters.cluster_invalid_plan: Cluster must contain at least a master topology element and a data topology element. 'master' node type is missing,'master' node type exists in more than one topology element (resources.elasticsearch[0].cluster_topology)
│ 	* api error: deployments.elasticsearch.node_roles_error: Invalid node_roles configuration: The node_roles in the plan contains values not present in the template. [id = hot_content] (resources.elasticsearch[0])
│
│
│
│   with ec_deployment.logging,
│   on deployment.tf line 17, in resource "ec_deployment" "logging":
│   17: resource "ec_deployment" "logging" {
│
╵

Terraform definition

deployment.tf
> cat deployment.tf
terraform {
  required_version = ">= 1.0.3"

  required_providers {
    ec = {
      source  = "elastic/ec"
      version = "0.2.1"
    }
  }
}

provider "ec" {
  endpoint = "https://cloud.elastic.co"
  apikey = "..."
}

resource "ec_deployment" "logging" {
  name = "logging"

  region                 = "us-east-1"
  version                = "7.13.4"
  deployment_template_id = "aws-io-optimized-v2"

  elasticsearch {
    topology {
      id         = "hot_content"
      zone_count = 1
      size       = "2g"
    }
  }
}

Steps to Reproduce

  1. Create a deployment with the configuration above.
  2. Go to the User Console, Edit Deployment -> Add a cold or frozen tier.
  3. Run terraform apply and get the error above.

Note that with a warm tier instead of a cold or frozen one, this scenario does not fail: the manually enabled warm nodes are removed after terraform apply.

Context

We use the Terraform provider in internal testing routines to create deployments, and we enable the frozen tier via our own tooling (which will later be migrated over to Terraform). The subsequent terraform apply call fails, which should not be the case (judging by the success with the warm tier).
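Until the provider reconciles this diff correctly, one possible stopgap is Terraform's lifecycle ignore_changes meta-argument, which the plan output above also mentions. This is only a sketch, not part of this report, and whether suppressing all drift detection on the elasticsearch block is acceptable depends on the workflow:

```hcl
resource "ec_deployment" "logging" {
  name = "logging"

  region                 = "us-east-1"
  version                = "7.13.4"
  deployment_template_id = "aws-io-optimized-v2"

  elasticsearch {
    topology {
      id         = "hot_content"
      zone_count = 1
      size       = "2g"
    }
  }

  lifecycle {
    # Ignore out-of-band topology edits made in the User Console so that
    # terraform apply does not attempt to remove the manually added tier.
    # Note: this also suppresses legitimate drift detection on the block.
    ignore_changes = [elasticsearch]
  }
}
```

This avoids the failing update entirely, at the cost of Terraform no longer reconciling any elasticsearch changes for the deployment.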

Possible Solution

N/A
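One untested idea, based on how this provider generally disables topology elements (declaring them with a size of "0g" rather than omitting them), would be to make the externally added tier explicit in the configuration so the plan sends a zero-size element instead of silently dropping the topology. A hedged sketch, not a confirmed fix for the frozen-tier case:

```hcl
elasticsearch {
  topology {
    id         = "hot_content"
    zone_count = 1
    size       = "2g"
  }

  # Hypothetical workaround: declare the manually added tier with size 0
  # so the provider sends an explicit zero-size topology element to the
  # API instead of omitting the "frozen" id from the plan.
  topology {
    id   = "frozen"
    size = "0g"
  }
}
```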

Your Environment

  • Version used:
 > terraform version
Terraform v1.0.3
on darwin_amd64
+ provider registry.terraform.io/elastic/ec v0.2.1
  • Running against Elastic Cloud SaaS or Elastic Cloud Enterprise and version: https://cloud.elastic.co
  • Operating System and version: macOS 10.14.6

Related

Might be related: #286

@chingis-elastic chingis-elastic added bug Something isn't working Team:Delivery labels Jul 27, 2021
@marclop
Contributor

marclop commented Jul 27, 2021

Thanks for opening the issue @chingis-elastic. #336 is the issue that's related to it, and there are a few things compounding this problem. The diffing doesn't seem to be quite right; I'm currently looking into it.

@perevernihata

Any updates or workarounds on this issue? We are experiencing it as well :(

@tobio
Member

tobio commented Aug 4, 2023

Duplicates #635

@tobio tobio closed this as completed Aug 4, 2023