aws_emr_cluster > 1.55.0 causes the cluster to be rebuilt every run #7405

Closed
cullenmcdermott opened this issue Jan 31, 2019 · 2 comments
Labels
  • bug: Addresses a defect in current functionality.
  • service/emr: Issues and PRs that pertain to the emr service.
  • stale: Old or inactive issues managed by automation; if no further action is taken these will be closed.

Comments

cullenmcdermott commented Jan 31, 2019

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform Version

0.10.8 - I know this is very old, and I can confirm that this does not happen on 0.11.11. However, we did not see this issue until we pulled in 1.56.0 of the provider.

Affected Resource(s)

  • aws_emr_cluster

Terraform Configuration Files

provider "aws" {
  version = "1.57.0"
  region  = "us-west-2"
}

resource "aws_emr_cluster" "test" {
  name             = "tf-bug-emr-test"
  release_label    = "emr-5.14.0"
  applications     = ["spark", "hadoop", "sqoop"]
  service_role     = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/EMR_DefaultRole"
  autoscaling_role = "EMR_AutoScaling_DefaultRole"

  ec2_attributes {
    subnet_id        = "${data.aws_subnet.az1.id}"
    instance_profile = "${data.aws_iam_instance_profile.my_profile.arn}"
  }

  instance_group {
    instance_role  = "MASTER"
    instance_type  = "m4.large"
    instance_count = 1

    ebs_config {
      size                 = "25"
      type                 = "gp2"
      volumes_per_instance = 1
    }
  }

  instance_group {
    instance_role  = "CORE"
    instance_type  = "m4.large"
    instance_count = 1

    ebs_config {
      size                 = "25"
      type                 = "gp2"
      volumes_per_instance = 1
    }
  }

  instance_group {
    instance_role      = "TASK"
    instance_type      = "m4.large"
    instance_count     = 1
    autoscaling_policy = "${data.template_file.emr_autoscale_policy_task.rendered}"
  }

  lifecycle {
    create_before_destroy = true

    ignore_changes = [
      "ec2_attributes.0.emr_managed_master_security_group",
      "ec2_attributes.0.emr_managed_slave_security_group",
      "ec2_attributes.0.service_access_security_group",
    ]
  }

  tags {
    Name = "test-emr-tf-bug"
  }
}

data "aws_subnet" "az1" {
  filter {
    name   = "tag:Name"
    values = ["my_subnet_name"]
  }
}

data "aws_caller_identity" "current" {}

data "template_file" "emr_autoscale_policy_task" {
  template = "${file("${path.module}/emr_autoscale_policy.json.tpl")}"

  vars {
    min_capacity = "2"
    max_capacity = "3"
  }
}
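
For reference, emr_autoscale_policy.json.tpl is not included in the report. A minimal sketch of what it might contain, following the EMR autoscaling policy document format and the min_capacity/max_capacity vars above (the scaling rule itself is illustrative, not from the original report):

{
  "Constraints": {
    "MinCapacity": ${min_capacity},
    "MaxCapacity": ${max_capacity}
  },
  "Rules": [
    {
      "Name": "ScaleOutOnLowMemory",
      "Description": "Illustrative rule: scale out when YARNMemoryAvailablePercentage drops below 15",
      "Action": {
        "SimpleScalingPolicyConfiguration": {
          "AdjustmentType": "CHANGE_IN_CAPACITY",
          "ScalingAdjustment": 1,
          "CoolDown": 300
        }
      },
      "Trigger": {
        "CloudWatchAlarmDefinition": {
          "ComparisonOperator": "LESS_THAN",
          "EvaluationPeriods": 1,
          "MetricName": "YARNMemoryAvailablePercentage",
          "Namespace": "AWS/ElasticMapReduce",
          "Period": 300,
          "Statistic": "AVERAGE",
          "Threshold": 15.0,
          "Unit": "PERCENT"
        }
      }
    }
  ]
}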

Debug Output

Plan Debug Output: https://gist.github.com/cullenmcdermott/48d2a109d0674d44aa43612b170a6bea
Apply Debug Output: https://gist.github.com/cullenmcdermott/869ec4b1388d0cefb62cbc5ff4475302

Panic Output

Panic Output: https://gist.github.com/cullenmcdermott/b613be75c04869bf3a48244ae0bb90af

Expected Behavior

I should be able to run terraform apply on this configuration multiple times and my cluster will not be rebuilt.

Actual Behavior

Terraform panics when I run an apply; when I run another apply, it recreates my cluster and deletes the old one.

Steps to Reproduce

The cause of the issue in my configuration appears to be that I set instance_count in my TASK instance_group to 1 while the autoscaling policy sets min/max to 2 and 3. Setting instance_count and the min value to the same number doesn't actually fix it, though: as soon as the instances scale from 2 to 3, Terraform tries to rebuild the cluster again.
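
Concretely, that attempted (and unsuccessful) workaround looked roughly like this:

instance_group {
  instance_role      = "TASK"
  instance_type      = "m4.large"
  instance_count     = 2 # aligned with min_capacity; still rebuilds once autoscaling reaches 3
  autoscaling_policy = "${data.template_file.emr_autoscale_policy_task.rendered}"
}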

  1. terraform apply - This creates the cluster the first time
  2. terraform plan - Shows that my cluster is going to be destroyed and recreated
  3. terraform plan - Terraform panics before doing anything
  4. terraform apply - Terraform creates a new cluster and deletes my previous cluster

Important Factoids

  1. We are indeed on a very old version of Terraform (0.10.8), and upgrading to 0.11.11 fixes it. However, I'm confused about what changed in provider 1.56 to cause this to start happening.
  2. Pinning the provider to 1.55.0 does not cause the cluster to be recreated.
  3. The mismatch reason is confusing to me and is the main reason I'm logging this issue. I just want to make sure there's not a bug in the provider that my ancient version of Terraform has uncovered:

Mismatch reason: attribute mismatch: instance_group.1757593885.ebs_config.2636219798.iops
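
The mismatch points at the computed iops attribute inside an instance_group's ebs_config set hash, and the TASK group above declares no ebs_config at all. One possibility worth trying (an assumption, not a confirmed fix) is to declare the EBS configuration explicitly so the set hash stays stable across plans:

instance_group {
  instance_role      = "TASK"
  instance_type      = "m4.large"
  instance_count     = 1
  autoscaling_policy = "${data.template_file.emr_autoscale_policy_task.rendered}"

  # Hypothetical workaround: spell out the EBS settings the API reports back
  # so the computed ebs_config hash (which now includes iops) does not drift.
  ebs_config {
    size                 = "25"
    type                 = "gp2"
    volumes_per_instance = 1
  }
}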
nywilken added the service/emr label Jan 31, 2019
nywilken added the bug, thinking, and service/athena labels Feb 11, 2019
nywilken removed the thinking label Feb 26, 2019
bflad removed the service/athena label Jul 11, 2019
github-actions commented

Marking this issue as stale due to inactivity. This helps our maintainers find and focus on the active issues. If this issue receives no comments in the next 30 days it will automatically be closed. Maintainers can also remove the stale label.

If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thank you!

github-actions bot added the stale label Jul 15, 2021
github-actions commented

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

github-actions bot locked as resolved and limited conversation to collaborators Sep 15, 2021