
reset_elasticsearch_password needs to be run 2 times to succeed. 1st attempt provides an error #766

Closed
paulrossmeier opened this issue Jan 16, 2024 · 0 comments · Fixed by #777
Labels
bug Something isn't working


Readiness Checklist

  • [x] I am running the latest version
  • [x] I checked the documentation and found no answer
  • [x] I checked to make sure that this issue has not already been filed
  • [x] I am reporting the issue to the correct repository (for multi-repository projects)

Expected Behavior

After removing ec_deployment from state to allow an upgrade from a provider version < 0.6.0 to a version > 0.6.0, and then importing the deployment back into state, a plan with reset_elasticsearch_password = true should set elasticsearch_username = "elastic" on the first run.
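
That is, the first plan after the import should already surface the change that currently only appears on the second run (see the output further below):

  ~ elasticsearch_username = "" -> "elastic"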

Current Behavior

On the first run after the import, terraform plan does not detect that reset_elasticsearch_password will update elasticsearch_username.
Because the change is missing from the plan, the apply then fails when the new elasticsearch_username value does not match the planned value:

╷
│ Error: Provider produced inconsistent result after apply
│
│ When applying changes to ec_deployment.cluster, provider "provider[\"registry.terraform.io/elastic/ec\"]" produced an unexpected new value: .elasticsearch_username: was cty.StringVal(""), but now cty.StringVal("elastic").
│
│ This is a bug in the provider, which should be reported in the provider's own issue tracker.
╵

On a second apply, the plan does pick up the elasticsearch_username update and the run succeeds:

Plan: 0 to add, 1 to change, 0 to destroy.

Changes to Outputs:
  ~ elasticsearch_password       = (sensitive value)
  ~ elasticsearch_username       = "" -> "elastic"

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.

Outputs:

elasticsearch_cloud_id = "REDACTED"
elasticsearch_deployment_id = "aws-general-purpose-arm-v5"
elasticsearch_https_endpoint = "REDACTED:443"
elasticsearch_password = <sensitive>
elasticsearch_username = "elastic"
elasticsearch_version = "8.9.1"

Terraform definition

terraform {
  required_version = "1.6.3"

  required_providers {
    ec = {
      source  = "elastic/ec"
      version = "0.10.0"
    }
    elasticstack = {
      source  = "elastic/elasticstack"
      version = ">= 0.7.0"
    }
  }
}

provider "ec" {
  apikey = REDACTED
}

provider "elasticstack" {
  elasticsearch {}
}


# Import block: make sure the deployment ID is correct.
import {
  to = ec_deployment.cluster
  id = "REDACTED"
}


resource "ec_deployment" "cluster" {
  name                   = var.component_aka_es_domain_name
  alias                  = var.component_aka_es_domain_name
  region                 = "us-east-1"
  version                = "8.9.1"
  deployment_template_id = "aws-general-purpose-arm-v5"
  traffic_filter         = [ec_deployment_traffic_filter.allow_all.id]
  reset_elasticsearch_password = true

  elasticsearch = {
    autoscale = "false"
    cold = {
      autoscaling = {}
      zone_count  = "3"
      size        = "0g"
    }
    frozen = {
      autoscaling = {}
      zone_count  = "3"
      size        = "0g"
    }
    hot = {
      autoscaling = {}
      zone_count  = "3"
      size        = "8g"
    }
    master = {
      autoscaling = {}
      zone_count  = "3"
      size        = "0g"
    }
    ml = {
      autoscaling = {}
      zone_count  = "1"
      size        = "0g"
    }
    warm = {
      autoscaling = {}
      zone_count  = "2"
      size        = "0g"
    }
  }
  kibana = {
    topology = {
      zone_count = "1"
    }
  }
  tags = {
    Name      = var.name_of_project
    component = var.component_aka_es_domain_name
  }
}


resource "ec_deployment_traffic_filter" "allow_all" {
  name   = "paul_test"
  region = "us-east-1"
  type   = "ip"

  rule {
    source = "0.0.0.0/0"
  }
}

Steps to Reproduce

  1. Build the deployment.
  2. Remove the deployment from state: terraform state rm ec_deployment.$name
  3. Add the import block and set reset_elasticsearch_password = true.
  4. Run terraform apply twice (the first fails, the second succeeds; see the sketch below).
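
A minimal sketch of steps 2-4, assuming the ec_deployment.cluster address from the definition above (the deployment ID is a placeholder):

# Step 2: drop the deployment from state so it can be re-imported.
#   terraform state rm ec_deployment.cluster

# Step 3: add the import block and enable the password reset.
import {
  to = ec_deployment.cluster
  id = "<deployment-id>"
}

resource "ec_deployment" "cluster" {
  # ... existing configuration from the definition above ...
  reset_elasticsearch_password = true
}

# Step 4: apply twice. The first apply currently fails with the
# "inconsistent result" error shown above; the second succeeds.
#   terraform apply
#   terraform apply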

Context

This issue affects the ability to move from EC provider versions < 0.6.0 to the most recent EC provider version.
It is a significant obstacle when the EC provider is used to manage Elastic Cloud alongside the elasticstack provider managing the cluster itself: removing and then importing the deployment does not bring the elasticsearch_username or elasticsearch_password values back into state, so after the import, reset_elasticsearch_password is required to restore them.
Those values are needed in state because the elasticstack provider consumes them in modules, for example:
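
For illustration, a sketch of that wiring, assuming the ec_deployment.cluster resource from the definition above and the attribute names exposed by the ec provider 0.10.x schema:

provider "elasticstack" {
  elasticsearch {
    # Without elasticsearch_username/elasticsearch_password in state
    # (the situation described above), these references resolve to empty
    # values and the provider cannot authenticate.
    endpoints = [ec_deployment.cluster.elasticsearch.https_endpoint]
    username  = ec_deployment.cluster.elasticsearch_username
    password  = ec_deployment.cluster.elasticsearch_password
  }
}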

Possible Solution

Your Environment

  • Version used: 0.10.0
  • Running against Elastic Cloud SaaS or Elastic Cloud Enterprise and version: Cloud SaaS
  • Environment name and version (e.g. Go 1.9): go version go1.21.6 linux/amd64
  • Server type and version:
  • Operating System and version: CentOS 7
  • Link to your project:
@paulrossmeier paulrossmeier added the bug Something isn't working label Jan 16, 2024
@tobio tobio closed this as completed in #777 Feb 6, 2024