
Error: Invalid for_each argument #2337

Closed · 1 task done
AndreiBanaruTakeda opened this issue Dec 9, 2022 · 35 comments

@AndreiBanaruTakeda

AndreiBanaruTakeda commented Dec 9, 2022

Description

Please provide a clear and concise description of the issue you are encountering, and a reproduction of your configuration (see the examples/* directory for references that you can copy+paste and tailor to match your configs if you are unable to copy your exact configuration). The reproduction MUST be executable by running terraform init && terraform apply without any further changes.

If your request is for a new feature, please use the Feature request template.

  • ✋ I have searched the open/closed issues and my issue is not listed.

⚠️ Note

Before you submit an issue, please perform the following first:

  1. Remove the local .terraform directory (ONLY if your state is stored remotely, which is hopefully the best practice you are already following): rm -rf .terraform/
  2. Re-initialize the project root to pull down modules: terraform init
  3. Re-attempt your terraform plan or apply and check if the issue still persists

Versions

  • Module version [Required]: 19.0.4

  • Terraform version: 1.2.7

  • Provider version(s):

Reproduction Code [Required]

Steps to reproduce the behavior:

module "eks_managed_node_groups" {
  source  = "terraform-aws-modules/eks/aws//modules/eks-managed-node-group"
  version = "19.0.4"

  for_each        = { for index, name in var.node_group_config : index => name }
  cluster_name    = var.cluster_name
  name            = join("", [var.cluster_name, each.value.capacity_type == "ON_DEMAND" ? "-D-NG-" : "-S-NG-", each.key < 9 ? "00" : "0", each.key + 1])
  use_name_prefix = false

  vpc_security_group_ids = var.vpc_security_group_ids

  create_iam_role            = false
  iam_role_arn               = var.iam_role_arn
  iam_role_attach_cni_policy = false

  subnet_ids = var.subnet_ids

  min_size     = each.value.minimum_size
  max_size     = each.value.maximum_size
  desired_size = each.value.desired_size

  labels = each.value.labels

  create_launch_template          = true
  launch_template_name            = join("", [var.cluster_name, each.value.capacity_type == "ON_DEMAND" ? "-D-NG-" : "-S-NG-", each.key < 9 ? "00" : "0", each.key + 1])
  launch_template_use_name_prefix = false

  block_device_mappings = {
    sda1 = {
      device_name = "/dev/xvda"
      ebs = {
        volume_type = "gp3"
        volume_size = each.value.disk_size
      }
    }
  }

  ami_id                     = var.ami_id
  ami_type                   = "CUSTOM"
  bootstrap_extra_args       = "--container-runtime containerd"
  cluster_auth_base64        = var.cluster_auth_base64
  cluster_endpoint           = var.cluster_endpoint
  enable_bootstrap_user_data = true
  post_bootstrap_user_data   = "mount -o remount,noexec /dev/shm"

  capacity_type  = each.value.capacity_type
  instance_types = each.value.instance_types
  tags           = var.shared_tags
}
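
The shape of var.node_group_config is not shown in the report; what follows is a hedged reconstruction inferred only from the attribute references above (the variable could equally be a map of objects, and every name and type here is an assumption):

```hcl
# Assumed variable definition, not part of the original issue.
variable "node_group_config" {
  type = list(object({
    capacity_type  = string       # "ON_DEMAND" or "SPOT"
    minimum_size   = number
    maximum_size   = number
    desired_size   = number
    disk_size      = number
    labels         = map(string)
    instance_types = list(string)
  }))
}
```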

Expected behavior

Should create X number of managed node groups.

Actual behavior

TFE throws an "Invalid for_each argument" error and cannot create the X node groups.

Terminal Output Screenshot(s)

(screenshot attached in the original issue)

Additional context

This happened after the upgrade from 18.31.2 to 19.0.4.

@bryantbiggs
Member

We'll need a reproduction that we can try out. This has variables all over the place that are unknown.
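
For anyone assembling such a reproduction: the idea is to replace every var.* reference with a literal so the configuration stands on its own under terraform init && terraform apply. A hedged, partial sketch, where every value is a made-up placeholder rather than something from the original report:

```hcl
# Hypothetical pinned inputs for a minimal reproduction; all values are placeholders.
module "eks_managed_node_group" {
  source  = "terraform-aws-modules/eks/aws//modules/eks-managed-node-group"
  version = "19.0.4"

  cluster_name = "repro-cluster"
  subnet_ids   = ["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"]

  create_iam_role            = false
  iam_role_arn               = "arn:aws:iam::111122223333:role/repro-node-role"
  iam_role_attach_cni_policy = false

  min_size     = 1
  max_size     = 2
  desired_size = 1
}
```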

@kevindashton

kevindashton commented Dec 9, 2022

Same error after upgrading from 18.20.x to 19.0.4 but with the aws_ec2_tag.cluster_primary_security_group resource:

│ Error: Invalid for_each argument
│ 
│   on .terraform/modules/eks/main.tf line 88, in resource "aws_ec2_tag" "cluster_primary_security_group":
│   88:   for_each = { for k, v in merge(var.tags, var.cluster_tags) :
│   89:     k => v if local.create && k != "Name" && var.create_cluster_primary_security_group_tags && v != null
│   90:   }
│     ├────────────────
│     │ local.create is true
│     │ var.cluster_tags is empty map of string
│     │ var.create_cluster_primary_security_group_tags is true
│     │ var.tags is map of string with 15 elements
│ 

NOTE: downgrading back to 18.31.2 resolves the issue
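
A likely trigger, hedged since the pasted error is truncated: the comprehension filters on v != null, so if any value in var.tags is derived from another resource and therefore unknown at plan time, Terraform cannot determine the final key set. Two possible workarounds, sketched with illustrative values (only the input names come from the error output above):

```hcl
# Sketch only; the tag keys and values are illustrative placeholders.
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "19.0.4"

  # Option 1: keep the tags fed into this for_each statically known.
  tags = {
    Environment = "prod"
    Team        = "platform"
  }

  # Option 2: skip tagging the cluster primary security group entirely.
  create_cluster_primary_security_group_tags = false
}
```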

@Pacobart

I have the same issue using eks-blueprints v4.18.1, which uses version v18.29.1 of this module.

Error:

```
$ terraform plan -no-color -out=$PLAN

Error: Invalid for_each argument

  on .terraform/modules/eks.aws_eks/main.tf line 290, in resource "aws_iam_role_policy_attachment" "this":
 290:   for_each = local.create_iam_role ? toset(compact(distinct(concat([
 291:     "${local.policy_arn_prefix}/AmazonEKSClusterPolicy",
 292:     "${local.policy_arn_prefix}/AmazonEKSVPCResourceController",
 293:   ], var.iam_role_additional_policies)))) : toset([])
    ├────────────────
    │ local.create_iam_role is true
    │ local.policy_arn_prefix is a string, known only after apply
    │ var.iam_role_additional_policies is empty list of string

The "for_each" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the for_each depends on.
```

Also related to: #1753

@bryantbiggs
Member

v18 of the EKS module is not relevant here, since it had a known issue with computed values for IAM policies and security groups; v19 corrected that behavior.
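
For context on that correction, hedged as a summary of the behavior visible in the output above rather than a quote from the changelog: in v18 the additional policies were a list concatenated with ARNs built from local.policy_arn_prefix, which is only known after apply, so the whole for_each set became unknown at plan time. v19 keys the equivalent input by static names, roughly:

```hcl
# Sketch of the v19-style input; "additional" is an arbitrary static key,
# and the ARN value may be unknown at plan time without breaking for_each.
iam_role_additional_policies = {
  additional = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
}
```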

@dduportal

Same here: on the Jenkins Infrastructure project, the upgrade from 18.x to 19.y breaks with this error unless we comment out the tags attribute (https://github.com/jenkins-infra/aws/blob/27d4f746748edcdb3ba49643cae3d2d329fb3153/eks-public-cluster.tf#L37-L42).

dduportal added commits to jenkins-infra/aws that referenced this issue on Dec 23, 2022, including: "chore: comment out EKS module global tags because of terraform-aws-modules/terraform-aws-eks#2337".
@timtorChen

timtorChen commented Dec 26, 2022

I face the same issue.

However, I think it is a little odd: adding the attribute create_iam_role = false should not trigger the aws_iam_role_policy_attachment block, should it? The condition sits inside the comprehension on this line of the module:

])) : k => v if var.create && var.create_iam_role }

Maybe taking the condition out of the for ... in loop would fix the issue:

resource "aws_iam_role_policy_attachment" "this" {
  # for_each = { for k, v in toset(compact([
  #   "${local.iam_role_policy_prefix}/AmazonEKSWorkerNodePolicy",
  #   "${local.iam_role_policy_prefix}/AmazonEC2ContainerRegistryReadOnly",
  #   var.iam_role_attach_cni_policy ? local.cni_policy : "",
  # ])) : k => v if var.create && var.create_iam_role }

  for_each = var.create && var.create_iam_role ? { for k, v in toset(compact([
    "${local.iam_role_policy_prefix}/AmazonEKSWorkerNodePolicy",
    "${local.iam_role_policy_prefix}/AmazonEC2ContainerRegistryReadOnly",
    var.iam_role_attach_cni_policy ? local.cni_policy : "",
  ])) : k => v } : {}

  policy_arn = each.value
  role       = aws_iam_role.this[0].name
}
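
The effect of this rework, as the error output later in the thread illustrates, is that the var.create && var.create_iam_role gate is evaluated before the comprehension: when either flag is false, for_each resolves to an empty map without Terraform ever needing the values of local.iam_role_policy_prefix or local.cni_policy, which are only known after apply.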

@wpbeckwith

wpbeckwith commented Jan 8, 2023

The above change of moving the condition check out of the for_each does work. This should be the standard syntax for all resources with a for_each. Or, even better, for_each itself should be made smarter. If it were a choice between fixing for_each or going all of 2023 without any other updates, I'd take for_each hands down.

@bryantbiggs
Member

Without a proper reproduction that can be executed to show the error, we won't be able to offer any guidance.

All I can say is that the examples we have provided in this project do work as intended.

@wpbeckwith

OK, here's what I've got. In the following output you can see a hacked-up version of the karpenter.sh Terraform config that creates a VPC, an EKS cluster, and an external managed node group. After the failure I use Atom to open the local module file and make the above change, and then Terraform can successfully plan and apply.

githib-2337.zip

wbeckwith@overwatch githib-2337 % terraform init
Initializing modules...
- eks in eks-cluster2

Downloading registry.terraform.io/terraform-aws-modules/eks/aws 19.5.1 for eks.eks...
- eks.eks in .terraform/modules/eks.eks
- eks.eks.eks_managed_node_group in .terraform/modules/eks.eks/modules/eks-managed-node-group
- eks.eks.eks_managed_node_group.user_data in .terraform/modules/eks.eks/modules/_user_data
- eks.eks.fargate_profile in .terraform/modules/eks.eks/modules/fargate-profile
Downloading registry.terraform.io/terraform-aws-modules/kms/aws 1.1.0 for eks.eks.kms...
- eks.eks.kms in .terraform/modules/eks.eks.kms
- eks.eks.self_managed_node_group in .terraform/modules/eks.eks/modules/self-managed-node-group
- eks.eks.self_managed_node_group.user_data in .terraform/modules/eks.eks/modules/_user_data
Downloading registry.terraform.io/terraform-aws-modules/eks/aws 19.5.1 for eks.eks_managed_node_group...
- eks.eks_managed_node_group in .terraform/modules/eks.eks_managed_node_group/modules/eks-managed-node-group
- eks.eks_managed_node_group.user_data in .terraform/modules/eks.eks_managed_node_group/modules/_user_data
- eks.managed_node_group_role in eks-node-role
Downloading registry.terraform.io/terraform-aws-modules/vpc/aws 3.18.1 for vpc...
- vpc in .terraform/modules/vpc

Initializing the backend...

Initializing provider plugins...
- Finding hashicorp/cloudinit versions matching ">= 2.0.0"...
- Finding kbst/kustomization versions matching ">= 0.9.0, 0.9.0"...
- Finding hashicorp/aws versions matching ">= 3.72.0, >= 3.73.0, >= 4.34.0, >= 4.47.0"...
- Finding hashicorp/kubernetes versions matching ">= 2.10.0, >= 2.14.0, >= 2.16.1"...
- Finding hashicorp/tls versions matching ">= 3.0.0, >= 3.4.0, 4.0.4"...
- Finding gavinbunney/kubectl versions matching ">= 1.14.0"...
- Finding hashicorp/helm versions matching ">= 2.7.1"...
- Using kbst/kustomization v0.9.0 from the shared cache directory
- Using hashicorp/aws v4.49.0 from the shared cache directory
- Using hashicorp/kubernetes v2.16.1 from the shared cache directory
- Using hashicorp/tls v4.0.4 from the shared cache directory
- Using gavinbunney/kubectl v1.14.0 from the shared cache directory
- Using hashicorp/helm v2.8.0 from the shared cache directory
- Using hashicorp/cloudinit v2.2.0 from the shared cache directory

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

╷
│ Warning: Incomplete lock file information for providers
│ 
│ Due to your customized provider installation methods, Terraform was forced to calculate lock file checksums locally for the following providers:
│   - gavinbunney/kubectl
│   - hashicorp/aws
│   - hashicorp/cloudinit
│   - hashicorp/helm
│   - hashicorp/kubernetes
│   - hashicorp/tls
│   - kbst/kustomization
│ 
│ The current .terraform.lock.hcl file only includes checksums for darwin_amd64, so Terraform running on another platform will fail to install these providers.
│ 
│ To calculate additional checksums for another platform, run:
│   terraform providers lock -platform=linux_amd64
│ (where linux_amd64 is the platform to generate)
╵

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
wbeckwith@overwatch githib-2337 % terraform plan
module.eks.module.managed_node_group_role.data.aws_partition.current: Reading...
module.eks.module.eks.module.kms.data.aws_partition.current: Reading...
module.eks.module.eks.module.kms.data.aws_caller_identity.current: Reading...
module.eks.module.eks.data.aws_caller_identity.current: Reading...
data.aws_partition.current: Reading...
module.eks.module.eks.data.aws_partition.current: Reading...
module.eks.module.managed_node_group_role.data.aws_partition.current: Read complete after 0s [id=aws]
module.eks.module.eks.module.kms.data.aws_partition.current: Read complete after 0s [id=aws]
data.aws_partition.current: Read complete after 0s [id=aws]
module.eks.module.eks.data.aws_partition.current: Read complete after 0s [id=aws]
data.aws_availability_zones.available: Reading...
module.eks.module.managed_node_group_role.data.aws_iam_policy_document.assume_role_policy: Reading...
module.eks.module.eks.data.aws_iam_policy_document.assume_role_policy[0]: Reading...
module.eks.module.eks.data.aws_iam_policy_document.assume_role_policy[0]: Read complete after 0s [id=2764486067]
module.eks.module.managed_node_group_role.data.aws_iam_policy_document.assume_role_policy: Read complete after 0s [id=2560088296]
module.eks.module.eks.data.aws_caller_identity.current: Read complete after 0s [id=763916856451]
module.eks.module.eks.data.aws_iam_session_context.current: Reading...
module.eks.module.eks.module.kms.data.aws_caller_identity.current: Read complete after 0s [id=763916856451]
data.aws_availability_zones.available: Read complete after 0s [id=us-west-2]
module.eks.module.eks.data.aws_iam_session_context.current: Read complete after 5s [id=arn:aws:sts::763916856451:assumed-role/AWSReservedSSO_AWSAdministratorAccess_8041dea1708bf70e/wendell.beckwith@redfin.com]
╷
│ Error: Invalid for_each argument
│ 
│   on .terraform/modules/eks.eks_managed_node_group/modules/eks-managed-node-group/main.tf line 434, in resource "aws_iam_role_policy_attachment" "this":
│  434:   for_each = { for k, v in toset(compact([
│  435:     "${local.iam_role_policy_prefix}/AmazonEKSWorkerNodePolicy",
│  436:     "${local.iam_role_policy_prefix}/AmazonEC2ContainerRegistryReadOnly",
│  437:     var.iam_role_attach_cni_policy ? local.cni_policy : "",
│  438:   ])) : k => v if var.create && var.create_iam_role }
│     ├────────────────
│     │ local.cni_policy is a string, known only after apply
│     │ local.iam_role_policy_prefix is a string, known only after apply
│     │ var.create is true
│     │ var.create_iam_role is false
│     │ var.iam_role_attach_cni_policy is true
│ 
│ The "for_each" map includes keys derived from resource attributes that cannot be determined until apply, and so Terraform cannot determine the full set of keys that will
│ identify the instances of this resource.
│ 
│ When working with unknown values in for_each, it's better to define the map keys statically in your configuration and place apply-time results only in the map values.
│ 
│ Alternatively, you could use the -target planning option to first apply only the resources that the for_each value depends on, and then apply a second time to fully
│ converge.
╵
wbeckwith@overwatch githib-2337 % atom .terraform/modules/eks.eks_managed_node_group/modules/eks-managed-node-group/main.tf

wbeckwith@overwatch githib-2337 % terraform plan                                                                           
module.eks.module.eks.module.kms.data.aws_caller_identity.current: Reading...
data.aws_partition.current: Reading...
module.eks.module.eks.module.kms.data.aws_partition.current: Reading...
module.eks.module.managed_node_group_role.data.aws_partition.current: Reading...
module.eks.module.eks.data.aws_caller_identity.current: Reading...
data.aws_availability_zones.available: Reading...
module.eks.module.eks.data.aws_partition.current: Reading...
data.aws_partition.current: Read complete after 0s [id=aws]
module.eks.module.eks.module.kms.data.aws_partition.current: Read complete after 0s [id=aws]
module.eks.module.managed_node_group_role.data.aws_partition.current: Read complete after 0s [id=aws]
module.eks.module.eks.data.aws_partition.current: Read complete after 0s [id=aws]
module.eks.module.managed_node_group_role.data.aws_iam_policy_document.assume_role_policy: Reading...
module.eks.module.eks.data.aws_iam_policy_document.assume_role_policy[0]: Reading...
module.eks.module.managed_node_group_role.data.aws_iam_policy_document.assume_role_policy: Read complete after 0s [id=2560088296]
module.eks.module.eks.data.aws_iam_policy_document.assume_role_policy[0]: Read complete after 0s [id=2764486067]
module.eks.module.eks.data.aws_caller_identity.current: Read complete after 0s [id=763916856451]
module.eks.module.eks.module.kms.data.aws_caller_identity.current: Read complete after 0s [id=763916856451]
module.eks.module.eks.data.aws_iam_session_context.current: Reading...
data.aws_availability_zones.available: Read complete after 0s [id=us-west-2]
module.eks.module.eks.data.aws_iam_session_context.current: Read complete after 0s [id=arn:aws:sts::763916856451:assumed-role/AWSReservedSSO_AWSAdministratorAccess_8041dea1708bf70e/wendell.beckwith@redfin.com]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create
 <= read (data resources)

Terraform will perform the following actions:

  # module.vpc.aws_eip.nat[0] will be created
  + resource "aws_eip" "nat" {
      + allocation_id        = (known after apply)
      + association_id       = (known after apply)
      + carrier_ip           = (known after apply)
      + customer_owned_ip    = (known after apply)
      + domain               = (known after apply)
      + id                   = (known after apply)
      + instance             = (known after apply)
      + network_border_group = (known after apply)
      + network_interface    = (known after apply)
      + private_dns          = (known after apply)
      + private_ip           = (known after apply)
      + public_dns           = (known after apply)
      + public_ip            = (known after apply)
      + public_ipv4_pool     = (known after apply)
      + tags                 = {
          + "Name" = "karpenter-demo-us-west-2a"
        }
      + tags_all             = {
          + "Name" = "karpenter-demo-us-west-2a"
        }
      + vpc                  = true
    }

  # module.vpc.aws_internet_gateway.this[0] will be created
  + resource "aws_internet_gateway" "this" {
      + arn      = (known after apply)
      + id       = (known after apply)
      + owner_id = (known after apply)
      + tags     = {
          + "Name" = "karpenter-demo"
        }
      + tags_all = {
          + "Name" = "karpenter-demo"
        }
      + vpc_id   = (known after apply)
    }

  # module.vpc.aws_nat_gateway.this[0] will be created
  + resource "aws_nat_gateway" "this" {
      + allocation_id        = (known after apply)
      + connectivity_type    = "public"
      + id                   = (known after apply)
      + network_interface_id = (known after apply)
      + private_ip           = (known after apply)
      + public_ip            = (known after apply)
      + subnet_id            = (known after apply)
      + tags                 = {
          + "Name" = "karpenter-demo-us-west-2a"
        }
      + tags_all             = {
          + "Name" = "karpenter-demo-us-west-2a"
        }
    }

  # module.vpc.aws_route.private_nat_gateway[0] will be created
  + resource "aws_route" "private_nat_gateway" {
      + destination_cidr_block = "0.0.0.0/0"
      + id                     = (known after apply)
      + instance_id            = (known after apply)
      + instance_owner_id      = (known after apply)
      + nat_gateway_id         = (known after apply)
      + network_interface_id   = (known after apply)
      + origin                 = (known after apply)
      + route_table_id         = (known after apply)
      + state                  = (known after apply)

      + timeouts {
          + create = "5m"
        }
    }

  # module.vpc.aws_route.public_internet_gateway[0] will be created
  + resource "aws_route" "public_internet_gateway" {
      + destination_cidr_block = "0.0.0.0/0"
      + gateway_id             = (known after apply)
      + id                     = (known after apply)
      + instance_id            = (known after apply)
      + instance_owner_id      = (known after apply)
      + network_interface_id   = (known after apply)
      + origin                 = (known after apply)
      + route_table_id         = (known after apply)
      + state                  = (known after apply)

      + timeouts {
          + create = "5m"
        }
    }

  # module.vpc.aws_route_table.private[0] will be created
  + resource "aws_route_table" "private" {
      + arn              = (known after apply)
      + id               = (known after apply)
      + owner_id         = (known after apply)
      + propagating_vgws = (known after apply)
      + route            = (known after apply)
      + tags             = {
          + "Name" = "karpenter-demo-private"
        }
      + tags_all         = {
          + "Name" = "karpenter-demo-private"
        }
      + vpc_id           = (known after apply)
    }

  # module.vpc.aws_route_table.public[0] will be created
  + resource "aws_route_table" "public" {
      + arn              = (known after apply)
      + id               = (known after apply)
      + owner_id         = (known after apply)
      + propagating_vgws = (known after apply)
      + route            = (known after apply)
      + tags             = {
          + "Name" = "karpenter-demo-public"
        }
      + tags_all         = {
          + "Name" = "karpenter-demo-public"
        }
      + vpc_id           = (known after apply)
    }

  # module.vpc.aws_route_table_association.private[0] will be created
  + resource "aws_route_table_association" "private" {
      + id             = (known after apply)
      + route_table_id = (known after apply)
      + subnet_id      = (known after apply)
    }

  # module.vpc.aws_route_table_association.private[1] will be created
  + resource "aws_route_table_association" "private" {
      + id             = (known after apply)
      + route_table_id = (known after apply)
      + subnet_id      = (known after apply)
    }

  # module.vpc.aws_route_table_association.private[2] will be created
  + resource "aws_route_table_association" "private" {
      + id             = (known after apply)
      + route_table_id = (known after apply)
      + subnet_id      = (known after apply)
    }

  # module.vpc.aws_route_table_association.public[0] will be created
  + resource "aws_route_table_association" "public" {
      + id             = (known after apply)
      + route_table_id = (known after apply)
      + subnet_id      = (known after apply)
    }

  # module.vpc.aws_route_table_association.public[1] will be created
  + resource "aws_route_table_association" "public" {
      + id             = (known after apply)
      + route_table_id = (known after apply)
      + subnet_id      = (known after apply)
    }

  # module.vpc.aws_route_table_association.public[2] will be created
  + resource "aws_route_table_association" "public" {
      + id             = (known after apply)
      + route_table_id = (known after apply)
      + subnet_id      = (known after apply)
    }

  # module.vpc.aws_subnet.private[0] will be created
  + resource "aws_subnet" "private" {
      + arn                                            = (known after apply)
      + assign_ipv6_address_on_creation                = false
      + availability_zone                              = "us-west-2a"
      + availability_zone_id                           = (known after apply)
      + cidr_block                                     = "10.0.0.0/20"
      + enable_dns64                                   = false
      + enable_resource_name_dns_a_record_on_launch    = false
      + enable_resource_name_dns_aaaa_record_on_launch = false
      + id                                             = (known after apply)
      + ipv6_cidr_block_association_id                 = (known after apply)
      + ipv6_native                                    = false
      + map_public_ip_on_launch                        = false
      + owner_id                                       = (known after apply)
      + private_dns_hostname_type_on_launch            = (known after apply)
      + tags                                           = {
          + "Name"                            = "karpenter-demo-private-us-west-2a"
          + "karpenter.sh/discovery"          = "true"
          + "kubernetes.io/role/internal-elb" = "1"
        }
      + tags_all                                       = {
          + "Name"                            = "karpenter-demo-private-us-west-2a"
          + "karpenter.sh/discovery"          = "true"
          + "kubernetes.io/role/internal-elb" = "1"
        }
      + vpc_id                                         = (known after apply)
    }

  # module.vpc.aws_subnet.private[1] will be created
  + resource "aws_subnet" "private" {
      + arn                                            = (known after apply)
      + assign_ipv6_address_on_creation                = false
      + availability_zone                              = "us-west-2b"
      + availability_zone_id                           = (known after apply)
      + cidr_block                                     = "10.0.16.0/20"
      + enable_dns64                                   = false
      + enable_resource_name_dns_a_record_on_launch    = false
      + enable_resource_name_dns_aaaa_record_on_launch = false
      + id                                             = (known after apply)
      + ipv6_cidr_block_association_id                 = (known after apply)
      + ipv6_native                                    = false
      + map_public_ip_on_launch                        = false
      + owner_id                                       = (known after apply)
      + private_dns_hostname_type_on_launch            = (known after apply)
      + tags                                           = {
          + "Name"                            = "karpenter-demo-private-us-west-2b"
          + "karpenter.sh/discovery"          = "true"
          + "kubernetes.io/role/internal-elb" = "1"
        }
      + tags_all                                       = {
          + "Name"                            = "karpenter-demo-private-us-west-2b"
          + "karpenter.sh/discovery"          = "true"
          + "kubernetes.io/role/internal-elb" = "1"
        }
      + vpc_id                                         = (known after apply)
    }

  # module.vpc.aws_subnet.private[2] will be created
  + resource "aws_subnet" "private" {
      + arn                                            = (known after apply)
      + assign_ipv6_address_on_creation                = false
      + availability_zone                              = "us-west-2c"
      + availability_zone_id                           = (known after apply)
      + cidr_block                                     = "10.0.32.0/20"
      + enable_dns64                                   = false
      + enable_resource_name_dns_a_record_on_launch    = false
      + enable_resource_name_dns_aaaa_record_on_launch = false
      + id                                             = (known after apply)
      + ipv6_cidr_block_association_id                 = (known after apply)
      + ipv6_native                                    = false
      + map_public_ip_on_launch                        = false
      + owner_id                                       = (known after apply)
      + private_dns_hostname_type_on_launch            = (known after apply)
      + tags                                           = {
          + "Name"                            = "karpenter-demo-private-us-west-2c"
          + "karpenter.sh/discovery"          = "true"
          + "kubernetes.io/role/internal-elb" = "1"
        }
      + tags_all                                       = {
          + "Name"                            = "karpenter-demo-private-us-west-2c"
          + "karpenter.sh/discovery"          = "true"
          + "kubernetes.io/role/internal-elb" = "1"
        }
      + vpc_id                                         = (known after apply)
    }

  # module.vpc.aws_subnet.public[0] will be created
  + resource "aws_subnet" "public" {
      + arn                                            = (known after apply)
      + assign_ipv6_address_on_creation                = false
      + availability_zone                              = "us-west-2a"
      + availability_zone_id                           = (known after apply)
      + cidr_block                                     = "10.0.48.0/24"
      + enable_dns64                                   = false
      + enable_resource_name_dns_a_record_on_launch    = false
      + enable_resource_name_dns_aaaa_record_on_launch = false
      + id                                             = (known after apply)
      + ipv6_cidr_block_association_id                 = (known after apply)
      + ipv6_native                                    = false
      + map_public_ip_on_launch                        = true
      + owner_id                                       = (known after apply)
      + private_dns_hostname_type_on_launch            = (known after apply)
      + tags                                           = {
          + "Name"                   = "karpenter-demo-public-us-west-2a"
          + "kubernetes.io/role/elb" = "1"
        }
      + tags_all                                       = {
          + "Name"                   = "karpenter-demo-public-us-west-2a"
          + "kubernetes.io/role/elb" = "1"
        }
      + vpc_id                                         = (known after apply)
    }

  # module.vpc.aws_subnet.public[1] will be created
  + resource "aws_subnet" "public" {
      + arn                                            = (known after apply)
      + assign_ipv6_address_on_creation                = false
      + availability_zone                              = "us-west-2b"
      + availability_zone_id                           = (known after apply)
      + cidr_block                                     = "10.0.49.0/24"
      + enable_dns64                                   = false
      + enable_resource_name_dns_a_record_on_launch    = false
      + enable_resource_name_dns_aaaa_record_on_launch = false
      + id                                             = (known after apply)
      + ipv6_cidr_block_association_id                 = (known after apply)
      + ipv6_native                                    = false
      + map_public_ip_on_launch                        = true
      + owner_id                                       = (known after apply)
      + private_dns_hostname_type_on_launch            = (known after apply)
      + tags                                           = {
          + "Name"                   = "karpenter-demo-public-us-west-2b"
          + "kubernetes.io/role/elb" = "1"
        }
      + tags_all                                       = {
          + "Name"                   = "karpenter-demo-public-us-west-2b"
          + "kubernetes.io/role/elb" = "1"
        }
      + vpc_id                                         = (known after apply)
    }

  # module.vpc.aws_subnet.public[2] will be created
  + resource "aws_subnet" "public" {
      + arn                                            = (known after apply)
      + assign_ipv6_address_on_creation                = false
      + availability_zone                              = "us-west-2c"
      + availability_zone_id                           = (known after apply)
      + cidr_block                                     = "10.0.50.0/24"
      + enable_dns64                                   = false
      + enable_resource_name_dns_a_record_on_launch    = false
      + enable_resource_name_dns_aaaa_record_on_launch = false
      + id                                             = (known after apply)
      + ipv6_cidr_block_association_id                 = (known after apply)
      + ipv6_native                                    = false
      + map_public_ip_on_launch                        = true
      + owner_id                                       = (known after apply)
      + private_dns_hostname_type_on_launch            = (known after apply)
      + tags                                           = {
          + "Name"                   = "karpenter-demo-public-us-west-2c"
          + "kubernetes.io/role/elb" = "1"
        }
      + tags_all                                       = {
          + "Name"                   = "karpenter-demo-public-us-west-2c"
          + "kubernetes.io/role/elb" = "1"
        }
      + vpc_id                                         = (known after apply)
    }

  # module.vpc.aws_vpc.this[0] will be created
  + resource "aws_vpc" "this" {
      + arn                                  = (known after apply)
      + cidr_block                           = "10.0.0.0/16"
      + default_network_acl_id               = (known after apply)
      + default_route_table_id               = (known after apply)
      + default_security_group_id            = (known after apply)
      + dhcp_options_id                      = (known after apply)
      + enable_classiclink                   = (known after apply)
      + enable_classiclink_dns_support       = (known after apply)
      + enable_dns_hostnames                 = true
      + enable_dns_support                   = true
      + enable_network_address_usage_metrics = (known after apply)
      + id                                   = (known after apply)
      + instance_tenancy                     = "default"
      + ipv6_association_id                  = (known after apply)
      + ipv6_cidr_block                      = (known after apply)
      + ipv6_cidr_block_network_border_group = (known after apply)
      + main_route_table_id                  = (known after apply)
      + owner_id                             = (known after apply)
      + tags                                 = {
          + "Name" = "karpenter-demo"
        }
      + tags_all                             = {
          + "Name" = "karpenter-demo"
        }
    }

  # module.eks.module.eks.data.tls_certificate.this[0] will be read during apply
  # (config refers to values not yet known)
 <= data "tls_certificate" "this" {
      + certificates = (known after apply)
      + id           = (known after apply)
      + url          = (known after apply)
    }

  # module.eks.module.eks.aws_cloudwatch_log_group.this[0] will be created
  + resource "aws_cloudwatch_log_group" "this" {
      + arn               = (known after apply)
      + id                = (known after apply)
      + name              = "/aws/eks/karpenter-demo/cluster"
      + name_prefix       = (known after apply)
      + retention_in_days = 90
      + skip_destroy      = false
      + tags              = {
          + "ClusterName" = "karpenter-demo"
          + "managed-by"  = "terraform"
        }
      + tags_all          = {
          + "ClusterName" = "karpenter-demo"
          + "managed-by"  = "terraform"
        }
    }

  # module.eks.module.eks.aws_ec2_tag.cluster_primary_security_group["ClusterName"] will be created
  + resource "aws_ec2_tag" "cluster_primary_security_group" {
      + id          = (known after apply)
      + key         = "ClusterName"
      + resource_id = (known after apply)
      + value       = "karpenter-demo"
    }

  # module.eks.module.eks.aws_ec2_tag.cluster_primary_security_group["managed-by"] will be created
  + resource "aws_ec2_tag" "cluster_primary_security_group" {
      + id          = (known after apply)
      + key         = "managed-by"
      + resource_id = (known after apply)
      + value       = "terraform"
    }

  # module.eks.module.eks.aws_eks_cluster.this[0] will be created
  + resource "aws_eks_cluster" "this" {
      + arn                       = (known after apply)
      + certificate_authority     = (known after apply)
      + cluster_id                = (known after apply)
      + created_at                = (known after apply)
      + enabled_cluster_log_types = [
          + "api",
          + "audit",
          + "authenticator",
        ]
      + endpoint                  = (known after apply)
      + id                        = (known after apply)
      + identity                  = (known after apply)
      + name                      = "karpenter-demo"
      + platform_version          = (known after apply)
      + role_arn                  = (known after apply)
      + status                    = (known after apply)
      + tags                      = {
          + "ClusterName" = "karpenter-demo"
          + "managed-by"  = "terraform"
        }
      + tags_all                  = {
          + "ClusterName" = "karpenter-demo"
          + "managed-by"  = "terraform"
        }
      + version                   = "1.24"

      + encryption_config {
          + resources = [
              + "secrets",
            ]

          + provider {
              + key_arn = (known after apply)
            }
        }

      + kubernetes_network_config {
          + ip_family         = (known after apply)
          + service_ipv4_cidr = (known after apply)
          + service_ipv6_cidr = (known after apply)
        }

      + timeouts {}

      + vpc_config {
          + cluster_security_group_id = (known after apply)
          + endpoint_private_access   = true
          + endpoint_public_access    = true
          + public_access_cidrs       = [
              + "0.0.0.0/0",
            ]
          + security_group_ids        = (known after apply)
          + subnet_ids                = (known after apply)
          + vpc_id                    = (known after apply)
        }
    }

  # module.eks.module.eks.aws_iam_openid_connect_provider.oidc_provider[0] will be created
  + resource "aws_iam_openid_connect_provider" "oidc_provider" {
      + arn             = (known after apply)
      + client_id_list  = [
          + "sts.amazonaws.com",
        ]
      + id              = (known after apply)
      + tags            = {
          + "ClusterName" = "karpenter-demo"
          + "Name"        = "karpenter-demo-eks-irsa"
          + "managed-by"  = "terraform"
        }
      + tags_all        = {
          + "ClusterName" = "karpenter-demo"
          + "Name"        = "karpenter-demo-eks-irsa"
          + "managed-by"  = "terraform"
        }
      + thumbprint_list = (known after apply)
      + url             = (known after apply)
    }

  # module.eks.module.eks.aws_iam_policy.cluster_encryption[0] will be created
  + resource "aws_iam_policy" "cluster_encryption" {
      + arn         = (known after apply)
      + description = "Cluster encryption policy to allow cluster role to utilize CMK provided"
      + id          = (known after apply)
      + name        = (known after apply)
      + name_prefix = "karpenter-demo-cluster-ClusterEncryption"
      + path        = "/"
      + policy      = (known after apply)
      + policy_id   = (known after apply)
      + tags        = {
          + "ClusterName" = "karpenter-demo"
          + "managed-by"  = "terraform"
        }
      + tags_all    = {
          + "ClusterName" = "karpenter-demo"
          + "managed-by"  = "terraform"
        }
    }

  # module.eks.module.eks.aws_iam_role.this[0] will be created
  + resource "aws_iam_role" "this" {
      + arn                   = (known after apply)
      + assume_role_policy    = jsonencode(
            {
              + Statement = [
                  + {
                      + Action    = "sts:AssumeRole"
                      + Effect    = "Allow"
                      + Principal = {
                          + Service = "eks.amazonaws.com"
                        }
                      + Sid       = "EKSClusterAssumeRole"
                    },
                ]
              + Version   = "2012-10-17"
            }
        )
      + create_date           = (known after apply)
      + force_detach_policies = true
      + id                    = (known after apply)
      + managed_policy_arns   = (known after apply)
      + max_session_duration  = 3600
      + name                  = (known after apply)
      + name_prefix           = "karpenter-demo-cluster-"
      + path                  = "/"
      + tags                  = {
          + "ClusterName" = "karpenter-demo"
          + "managed-by"  = "terraform"
        }
      + tags_all              = {
          + "ClusterName" = "karpenter-demo"
          + "managed-by"  = "terraform"
        }
      + unique_id             = (known after apply)

      + inline_policy {
          + name   = "karpenter-demo-cluster"
          + policy = jsonencode(
                {
                  + Statement = [
                      + {
                          + Action   = [
                              + "logs:CreateLogGroup",
                            ]
                          + Effect   = "Deny"
                          + Resource = "*"
                        },
                    ]
                  + Version   = "2012-10-17"
                }
            )
        }
    }

  # module.eks.module.eks.aws_iam_role_policy_attachment.cluster_encryption[0] will be created
  + resource "aws_iam_role_policy_attachment" "cluster_encryption" {
      + id         = (known after apply)
      + policy_arn = (known after apply)
      + role       = (known after apply)
    }

  # module.eks.module.eks.aws_iam_role_policy_attachment.this["AmazonEKSClusterPolicy"] will be created
  + resource "aws_iam_role_policy_attachment" "this" {
      + id         = (known after apply)
      + policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
      + role       = (known after apply)
    }

  # module.eks.module.eks.aws_iam_role_policy_attachment.this["AmazonEKSVPCResourceController"] will be created
  + resource "aws_iam_role_policy_attachment" "this" {
      + id         = (known after apply)
      + policy_arn = "arn:aws:iam::aws:policy/AmazonEKSVPCResourceController"
      + role       = (known after apply)
    }

  # module.eks.module.eks.aws_security_group.cluster[0] will be created
  + resource "aws_security_group" "cluster" {
      + arn                    = (known after apply)
      + description            = "EKS cluster security group"
      + egress                 = (known after apply)
      + id                     = (known after apply)
      + ingress                = (known after apply)
      + name                   = (known after apply)
      + name_prefix            = "karpenter-demo-cluster-"
      + owner_id               = (known after apply)
      + revoke_rules_on_delete = false
      + tags                   = {
          + "ClusterName" = "karpenter-demo"
          + "Name"        = "karpenter-demo-cluster"
          + "managed-by"  = "terraform"
        }
      + tags_all               = {
          + "ClusterName" = "karpenter-demo"
          + "Name"        = "karpenter-demo-cluster"
          + "managed-by"  = "terraform"
        }
      + vpc_id                 = (known after apply)
    }

  # module.eks.module.eks.aws_security_group.node[0] will be created
  + resource "aws_security_group" "node" {
      + arn                    = (known after apply)
      + description            = "EKS node shared security group"
      + egress                 = (known after apply)
      + id                     = (known after apply)
      + ingress                = (known after apply)
      + name                   = (known after apply)
      + name_prefix            = "karpenter-demo-node-"
      + owner_id               = (known after apply)
      + revoke_rules_on_delete = false
      + tags                   = {
          + "ClusterName"                          = "karpenter-demo"
          + "Name"                                 = "karpenter-demo-node"
          + "kubernetes.io/cluster/karpenter-demo" = "owned"
          + "managed-by"                           = "terraform"
        }
      + tags_all               = {
          + "ClusterName"                          = "karpenter-demo"
          + "Name"                                 = "karpenter-demo-node"
          + "kubernetes.io/cluster/karpenter-demo" = "owned"
          + "managed-by"                           = "terraform"
        }
      + vpc_id                 = (known after apply)
    }

  # module.eks.module.eks.aws_security_group_rule.cluster["egress_nodes_ephemeral_ports_tcp"] will be created
  + resource "aws_security_group_rule" "cluster" {
      + description              = "To node 1025-65535"
      + from_port                = 1025
      + id                       = (known after apply)
      + protocol                 = "tcp"
      + security_group_id        = (known after apply)
      + security_group_rule_id   = (known after apply)
      + self                     = false
      + source_security_group_id = (known after apply)
      + to_port                  = 65535
      + type                     = "egress"
    }

  # module.eks.module.eks.aws_security_group_rule.cluster["ingress_nodes_443"] will be created
  + resource "aws_security_group_rule" "cluster" {
      + description              = "Node groups to cluster API"
      + from_port                = 443
      + id                       = (known after apply)
      + protocol                 = "tcp"
      + security_group_id        = (known after apply)
      + security_group_rule_id   = (known after apply)
      + self                     = false
      + source_security_group_id = (known after apply)
      + to_port                  = 443
      + type                     = "ingress"
    }

  # module.eks.module.eks.aws_security_group_rule.node["egress_all"] will be created
  + resource "aws_security_group_rule" "node" {
      + cidr_blocks              = [
          + "0.0.0.0/0",
        ]
      + description              = "Allow all egress"
      + from_port                = 0
      + id                       = (known after apply)
      + prefix_list_ids          = []
      + protocol                 = "-1"
      + security_group_id        = (known after apply)
      + security_group_rule_id   = (known after apply)
      + self                     = false
      + source_security_group_id = (known after apply)
      + to_port                  = 0
      + type                     = "egress"
    }

  # module.eks.module.eks.aws_security_group_rule.node["ingress_cluster_443"] will be created
  + resource "aws_security_group_rule" "node" {
      + description              = "Cluster API to node groups"
      + from_port                = 443
      + id                       = (known after apply)
      + prefix_list_ids          = []
      + protocol                 = "tcp"
      + security_group_id        = (known after apply)
      + security_group_rule_id   = (known after apply)
      + self                     = false
      + source_security_group_id = (known after apply)
      + to_port                  = 443
      + type                     = "ingress"
    }

  # module.eks.module.eks.aws_security_group_rule.node["ingress_cluster_4443_webhook"] will be created
  + resource "aws_security_group_rule" "node" {
      + description              = "Cluster API to node 4443/tcp webhook"
      + from_port                = 4443
      + id                       = (known after apply)
      + prefix_list_ids          = []
      + protocol                 = "tcp"
      + security_group_id        = (known after apply)
      + security_group_rule_id   = (known after apply)
      + self                     = false
      + source_security_group_id = (known after apply)
      + to_port                  = 4443
      + type                     = "ingress"
    }

  # module.eks.module.eks.aws_security_group_rule.node["ingress_cluster_8443_webhook"] will be created
  + resource "aws_security_group_rule" "node" {
      + description              = "Cluster API to node 8443/tcp webhook"
      + from_port                = 8443
      + id                       = (known after apply)
      + prefix_list_ids          = []
      + protocol                 = "tcp"
      + security_group_id        = (known after apply)
      + security_group_rule_id   = (known after apply)
      + self                     = false
      + source_security_group_id = (known after apply)
      + to_port                  = 8443
      + type                     = "ingress"
    }

  # module.eks.module.eks.aws_security_group_rule.node["ingress_cluster_9443_webhook"] will be created
  + resource "aws_security_group_rule" "node" {
      + description              = "Cluster API to node 9443/tcp webhook"
      + from_port                = 9443
      + id                       = (known after apply)
      + prefix_list_ids          = []
      + protocol                 = "tcp"
      + security_group_id        = (known after apply)
      + security_group_rule_id   = (known after apply)
      + self                     = false
      + source_security_group_id = (known after apply)
      + to_port                  = 9443
      + type                     = "ingress"
    }

  # module.eks.module.eks.aws_security_group_rule.node["ingress_cluster_kubelet"] will be created
  + resource "aws_security_group_rule" "node" {
      + description              = "Cluster API to node kubelets"
      + from_port                = 10250
      + id                       = (known after apply)
      + prefix_list_ids          = []
      + protocol                 = "tcp"
      + security_group_id        = (known after apply)
      + security_group_rule_id   = (known after apply)
      + self                     = false
      + source_security_group_id = (known after apply)
      + to_port                  = 10250
      + type                     = "ingress"
    }

  # module.eks.module.eks.aws_security_group_rule.node["ingress_nodes_ephemeral"] will be created
  + resource "aws_security_group_rule" "node" {
      + description              = "Node to node ingress on ephemeral ports"
      + from_port                = 1025
      + id                       = (known after apply)
      + prefix_list_ids          = []
      + protocol                 = "tcp"
      + security_group_id        = (known after apply)
      + security_group_rule_id   = (known after apply)
      + self                     = true
      + source_security_group_id = (known after apply)
      + to_port                  = 65535
      + type                     = "ingress"
    }

  # module.eks.module.eks.aws_security_group_rule.node["ingress_self_coredns_tcp"] will be created
  + resource "aws_security_group_rule" "node" {
      + description              = "Node to node CoreDNS"
      + from_port                = 53
      + id                       = (known after apply)
      + prefix_list_ids          = []
      + protocol                 = "tcp"
      + security_group_id        = (known after apply)
      + security_group_rule_id   = (known after apply)
      + self                     = true
      + source_security_group_id = (known after apply)
      + to_port                  = 53
      + type                     = "ingress"
    }

  # module.eks.module.eks.aws_security_group_rule.node["ingress_self_coredns_udp"] will be created
  + resource "aws_security_group_rule" "node" {
      + description              = "Node to node CoreDNS UDP"
      + from_port                = 53
      + id                       = (known after apply)
      + prefix_list_ids          = []
      + protocol                 = "udp"
      + security_group_id        = (known after apply)
      + security_group_rule_id   = (known after apply)
      + self                     = true
      + source_security_group_id = (known after apply)
      + to_port                  = 53
      + type                     = "ingress"
    }

  # module.eks.module.eks_managed_node_group.data.aws_caller_identity.current will be read during apply
  # (depends on a resource or a module with changes pending)
 <= data "aws_caller_identity" "current" {
      + account_id = (known after apply)
      + arn        = (known after apply)
      + id         = (known after apply)
      + user_id    = (known after apply)
    }

  # module.eks.module.eks_managed_node_group.data.aws_partition.current will be read during apply
  # (depends on a resource or a module with changes pending)
 <= data "aws_partition" "current" {
      + dns_suffix         = (known after apply)
      + id                 = (known after apply)
      + partition          = (known after apply)
      + reverse_dns_prefix = (known after apply)
    }

  # module.eks.module.eks_managed_node_group.aws_eks_node_group.this[0] will be created
  + resource "aws_eks_node_group" "this" {
      + ami_type               = "AL2_x86_64"
      + arn                    = (known after apply)
      + capacity_type          = "ON_DEMAND"
      + cluster_name           = "karpenter-demo"
      + disk_size              = (known after apply)
      + id                     = (known after apply)
      + instance_types         = [
          + "m6a.4xlarge",
          + "m5a.4xlarge",
          + "m6i.4xlarge",
          + "m5.4xlarge",
        ]
      + node_group_name        = (known after apply)
      + node_group_name_prefix = "karpenter-demo-standard-"
      + node_role_arn          = (known after apply)
      + release_version        = (known after apply)
      + resources              = (known after apply)
      + status                 = (known after apply)
      + subnet_ids             = (known after apply)
      + tags                   = {
          + "Name" = "karpenter-demo-standard"
        }
      + tags_all               = {
          + "Name" = "karpenter-demo-standard"
        }
      + version                = "1.24"

      + launch_template {
          + id      = (known after apply)
          + name    = (known after apply)
          + version = (known after apply)
        }

      + scaling_config {
          + desired_size = 3
          + max_size     = 3
          + min_size     = 3
        }

      + timeouts {}

      + update_config {
          + max_unavailable_percentage = 33
        }
    }

  # module.eks.module.eks_managed_node_group.aws_launch_template.this[0] will be created
  + resource "aws_launch_template" "this" {
      + arn                    = (known after apply)
      + default_version        = (known after apply)
      + id                     = (known after apply)
      + latest_version         = (known after apply)
      + name                   = (known after apply)
      + name_prefix            = "karpenter-demo-standard-eks-node-group-"
      + tags_all               = (known after apply)
      + update_default_version = true
      + vpc_security_group_ids = (known after apply)

      + block_device_mappings {
          + device_name = "/dev/xvda"

          + ebs {
              + delete_on_termination = "true"
              + encrypted             = "true"
              + iops                  = 3000
              + throughput            = 125
              + volume_size           = 50
              + volume_type           = "gp3"
            }
        }

      + metadata_options {
          + http_endpoint               = "enabled"
          + http_protocol_ipv6          = "disabled"
          + http_put_response_hop_limit = 2
          + http_tokens                 = "required"
          + instance_metadata_tags      = "disabled"
        }

      + monitoring {
          + enabled = true
        }

      + tag_specifications {
          + resource_type = "instance"
          + tags          = {
              + "Name" = "karpenter-demo-standard"
            }
        }
      + tag_specifications {
          + resource_type = "network-interface"
          + tags          = {
              + "Name" = "karpenter-demo-standard"
            }
        }
      + tag_specifications {
          + resource_type = "volume"
          + tags          = {
              + "Name" = "karpenter-demo-standard"
            }
        }
    }

  # module.eks.module.managed_node_group_role.aws_iam_role.nodes will be created
  + resource "aws_iam_role" "nodes" {
      + arn                   = (known after apply)
      + assume_role_policy    = jsonencode(
            {
              + Statement = [
                  + {
                      + Action    = "sts:AssumeRole"
                      + Effect    = "Allow"
                      + Principal = {
                          + Service = "ec2.amazonaws.com"
                        }
                      + Sid       = "EKSNodeAssumeRole"
                    },
                ]
              + Version   = "2012-10-17"
            }
        )
      + create_date           = (known after apply)
      + description           = "IAM Role for EKS Nodes in the karpenter-demo cluster"
      + force_detach_policies = true
      + id                    = (known after apply)
      + managed_policy_arns   = (known after apply)
      + max_session_duration  = 3600
      + name                  = "karpenter-demo-eks-node-group"
      + name_prefix           = (known after apply)
      + path                  = "/"
      + tags_all              = (known after apply)
      + unique_id             = (known after apply)

      + inline_policy {
          + name   = (known after apply)
          + policy = (known after apply)
        }
    }

  # module.eks.module.managed_node_group_role.aws_iam_role_policy_attachment.this["AmazonEC2ContainerRegistryReadOnly"] will be created
  + resource "aws_iam_role_policy_attachment" "this" {
      + id         = (known after apply)
      + policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
      + role       = "karpenter-demo-eks-node-group"
    }

  # module.eks.module.managed_node_group_role.aws_iam_role_policy_attachment.this["AmazonEKSWorkerNodePolicy"] will be created
  + resource "aws_iam_role_policy_attachment" "this" {
      + id         = (known after apply)
      + policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
      + role       = "karpenter-demo-eks-node-group"
    }

  # module.eks.module.managed_node_group_role.aws_iam_role_policy_attachment.this["AmazonEKS_CNI_Policy"] will be created
  + resource "aws_iam_role_policy_attachment" "this" {
      + id         = (known after apply)
      + policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
      + role       = "karpenter-demo-eks-node-group"
    }

  # module.eks.module.managed_node_group_role.aws_iam_role_policy_attachment.this["AmazonSSMManagedInstanceCore"] will be created
  + resource "aws_iam_role_policy_attachment" "this" {
      + id         = (known after apply)
      + policy_arn = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
      + role       = "karpenter-demo-eks-node-group"
    }

  # module.eks.module.eks.module.kms.data.aws_iam_policy_document.this[0] will be read during apply
  # (config refers to values not yet known)
 <= data "aws_iam_policy_document" "this" {
      + id                        = (known after apply)
      + json                      = (known after apply)
      + override_policy_documents = []
      + source_policy_documents   = []

      + statement {
          + actions   = [
              + "kms:CancelKeyDeletion",
              + "kms:Create*",
              + "kms:Delete*",
              + "kms:Describe*",
              + "kms:Disable*",
              + "kms:Enable*",
              + "kms:Get*",
              + "kms:List*",
              + "kms:Put*",
              + "kms:Revoke*",
              + "kms:ScheduleKeyDeletion",
              + "kms:TagResource",
              + "kms:UntagResource",
              + "kms:Update*",
            ]
          + resources = [
              + "*",
            ]
          + sid       = "KeyAdministration"

          + principals {
              + identifiers = [
                  + "arn:aws:iam::763916856451:role/aws-reserved/sso.amazonaws.com/us-west-2/AWSReservedSSO_AWSAdministratorAccess_8041dea1708bf70e",
                ]
              + type        = "AWS"
            }
        }
      + statement {
          + actions   = [
              + "kms:Decrypt",
              + "kms:DescribeKey",
              + "kms:Encrypt",
              + "kms:GenerateDataKey*",
              + "kms:ReEncrypt*",
            ]
          + resources = [
              + "*",
            ]
          + sid       = "KeyUsage"

          + principals {
              + identifiers = [
                  + (known after apply),
                ]
              + type        = "AWS"
            }
        }
    }

  # module.eks.module.eks.module.kms.aws_kms_alias.this["cluster"] will be created
  + resource "aws_kms_alias" "this" {
      + arn            = (known after apply)
      + id             = (known after apply)
      + name           = "alias/eks/karpenter-demo"
      + name_prefix    = (known after apply)
      + target_key_arn = (known after apply)
      + target_key_id  = (known after apply)
    }

  # module.eks.module.eks.module.kms.aws_kms_key.this[0] will be created
  + resource "aws_kms_key" "this" {
      + arn                                = (known after apply)
      + bypass_policy_lockout_safety_check = false
      + customer_master_key_spec           = "SYMMETRIC_DEFAULT"
      + description                        = "karpenter-demo cluster encryption key"
      + enable_key_rotation                = true
      + id                                 = (known after apply)
      + is_enabled                         = true
      + key_id                             = (known after apply)
      + key_usage                          = "ENCRYPT_DECRYPT"
      + multi_region                       = false
      + policy                             = (known after apply)
      + tags                               = {
          + "ClusterName" = "karpenter-demo"
          + "managed-by"  = "terraform"
        }
      + tags_all                           = {
          + "ClusterName" = "karpenter-demo"
          + "managed-by"  = "terraform"
        }
    }

Plan: 52 to add, 0 to change, 0 to destroy.

───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply" now.

@wpbeckwith

BTW, what triggers this failure is the managed node group depending on the managed_node_group_role module. That's not how we have things IRL. IRL, we have another module used between the creation of the cluster and the creation of the MNG, but that would have been too much code to strip down to an example.

@bryantbiggs
Member

This is GitHub; you can create a repo or gist if necessary - it's not safe to open zips off the internet.

@wpbeckwith

Here's a repo with the error. https://github.com/wpbeckwith/terraform-aws-eks-2337

@bryantbiggs
Member

Here's a repo with the error. https://github.com/wpbeckwith/terraform-aws-eks-2337

Thank you for sharing that. That is quite a bit to unpack - may I ask what your motivation was for setting up your code this way?

@wpbeckwith

We have our Terraform configured to build and provision a cluster in one shot, without needing to run the config multiple times with the -target option to work around the issue where Terraform loses the ability to manipulate the aws-auth map. So what we do is:

  1. Create an EKS cluster without a MNG
  2. We have an eks-auth-map module that uses the kbst provider to set the aws-auth map with all the required roles/users for provisioning
  3. Create an MNG with a dependency on the eks-auth-map module
  4. Rest of the cluster setup.

Until EKS provides a better API, this works; a rough sketch of the layout follows.
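
Roughly, the layout looks like this (module names, sources, and inputs are illustrative placeholders, not our actual code):

# Sketch only: sources and inputs stand in for our internal modules.
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "19.0.4"

  cluster_name = "demo"
  # ... cluster inputs, but no eks_managed_node_groups here
}

module "eks_auth_map" {
  source = "./modules/eks-auth-map" # hypothetical internal module using the kbst provider

  cluster_name = module.eks.cluster_name
  # roles/users required for provisioning
}

module "managed_node_group" {
  source  = "terraform-aws-modules/eks/aws//modules/eks-managed-node-group"
  version = "19.0.4"

  cluster_name = module.eks.cluster_name
  subnet_ids   = var.subnet_ids

  # step 3: wait for the aws-auth map before creating nodes (this explicit
  # depends_on is exactly what gets debated later in this thread)
  depends_on = [module.eks_auth_map]
}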

@bryantbiggs
Member

Isn't this possible with the latest module today?

@wpbeckwith

Actually, the only reason we have a separate eks-node-role module is that without it, our setup also triggers the for_each issue. So we just create the role ourselves and pass it into the MNG module.

@wpbeckwith

No

Isn't this possible with the latest module today?

No, because as soon as you make the MNG depend on the eks-auth-map module you will trigger the for_each issue. However, with the change @timtorChen identified of moving the conditional check to the front of the for_each, the condition evaluates to false, only an empty map is given to the for_each, and Terraform is then happy.
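
To illustrate the general idea (a simplified sketch, not the module's actual code or the exact patch):

# When the collection itself may be unknown until apply (for example because a
# data source inside the module was deferred), filtering inside the
# comprehension cannot help at plan time:
for_each = { for k, v in local.maybe_unknown : k => v if var.create }

# Gating the whole collection on a value that is known at plan time lets a
# disabled path collapse to a statically known empty map:
for_each = var.create ? local.maybe_unknown : {}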

@bryantbiggs
Member

Have you seen our examples? They do not have the issues you are describing, which is why I am curious as to why your setup is so piecemeal.

@bryantbiggs
Member

No

Isn't this possible with the latest module today?

No, because as soon as you make the MNG depend on the eks-auth-map module you will trigger the for_each issue. However, with the change @timtorChen identified of moving the conditional check to the front of the for_each, the condition evaluates to false, only an empty map is given to the for_each, and Terraform is then happy.

In Terraform, you do not set an explicit depends_on in a module instantiation. This is very problematic and disruptive to resources.

@wpbeckwith

I have seen the latest examples. I'm in the process of updating our modules to take advantage of all the changes in 19.5.1. In some cases we can remove code that we had (e.g. Karpenter), but in others we still need to separate the pieces as I have stated.

If you take my repo and do a terraform init && terraform plan, it will fail. If you then comment out the module dependency in the eks-cluster/main.tf file, it will work.

@bryantbiggs
Member

And herein lies the problem - you do not need the depends_on because you are already passing in a computed value and Terraform will resolve this first https://github.com/wpbeckwith/terraform-aws-eks-2337/blob/1bbf2c76562bacda487c5bbbb26dbd47789f3762/eks-cluster2/main.tf#L129

And second, as I stated above, Terraform states this is not recommended because it's problematic: hashicorp/terraform#26383 (comment)

So remove the depends_on and the problem is solved.
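
The difference, sketched with illustrative names (these are before/after alternatives, not two blocks to use together):

# Before: an explicit depends_on on the module defers every data source inside
# it to apply time, which is what breaks the module's internal for_each maps.
module "node_group" {
  source  = "terraform-aws-modules/eks/aws//modules/eks-managed-node-group"
  version = "19.0.4"
  # ...
  depends_on = [module.eks_auth_map]
}

# After: passing a computed value (here, the cluster name) already orders the
# node group after the cluster, without deferring unrelated data sources.
module "node_group" {
  source       = "terraform-aws-modules/eks/aws//modules/eks-managed-node-group"
  version      = "19.0.4"
  cluster_name = module.eks.cluster_name
  # ...
}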

@wpbeckwith

In Terraform, you do not set explicit depends_on in a module instantiation. This is very problematic and disruptive to resources

I feel like you are debating the purity of Terraform and I'm dealing with the reality of it. AFAICT, Terraform tracks the dependencies of variables, and as they become available that unblocks other resources/modules using those variables. Really fantastic engineering. However, because things are not deterministic, there can be random failures where things are not updated before something else. This is the reality we experienced until we separated things out. BTW, this code was working with 18.26.6, which is where we are upgrading from.

@wpbeckwith

And herein lies the problem - you do not need the depends_on because you are already passing in a computed value and Terraform will resolve this first https://github.com/wpbeckwith/terraform-aws-eks-2337/blob/1bbf2c76562bacda487c5bbbb26dbd47789f3762/eks-cluster2/main.tf#L129

And 2, as I stated above, Terraform states this is not recommended because its problematic hashicorp/terraform#26383 (comment)

So remove the depends_on and problem solved

I know that the depends_on in this module is not needed. I only added it there because it saved me from having to add in our actual eks-auth-map module and then needing to remove any company-related blah blah blah. If I add that module, then we really do want the MNG to depend on it, and they currently share zero attributes, so an implicit dependency is not possible; only an explicit one works.

If you really need to see that module in action then I will add it to the repo, as I really want to get this issue resolved.

@bryantbiggs
Member

I feel like you are debating the purity of terraform and I'm dealing with the reality of it.

No, I am trying to get to the bottom of whether there is a real issue in the module or not. Having to work backwards from zero information through a plethora of code fragments in a very odd setup is not easy. As I said before, the examples we provide are not only examples but also what we use for tests. If you are able to modify one of our examples to demonstrate the error, that would be very useful. However, that will NEVER negate the fact that using an explicit depends_on in a module instantiation is very problematic, and the only course of action is to remove it and find an alternate means (99% of the time in my experience, these are redundant with implicit dependencies and therefore provide no benefit, only harm; hence the course of action is to simply remove them).

@bryantbiggs
Member

I know that the depends_on in this module is not needed. I only added it there because it saved me from having to add in our actual eks-auth-module and then needing to remove any company related blah blah blah.

Please provide a representative reproduction that demonstrates the issue, ideally using one of our examples modified to highlight the unintended behavior. Until then, I don't have anything further to offer.

@bryantbiggs
Member

Here is a reproduction that I think is close to some of the configs shown here, but it does not show any of the errors reported by others above. Please feel free to use and modify it to demonstrate the issue in question.

https://github.com/bryantbiggs/how-to-create-reproduction

@bryantbiggs
Member

Any update on the reproduction? Is this still an issue?

@thomasschuiki

I also encountered a similar error message after upgrading from 0.18 to 0.19.
My issue was that I was using the random_string resource in the cluster name. That resource has to be applied beforehand; after that, everything worked as expected.
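
One way to do that (the resource address is illustrative) is a targeted apply for the random suffix before the full run:

terraform apply -target=random_string.suffix   # make the suffix known first
terraform apply                                # then converge the rest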

@sanarena

I faced this issue and @wpbeckwith's patch in #2388 solved it for me. Thanks for the patch!

@AndreiBanaruTakeda
Author

any update on the reproduction? is this still an issue?

Apologies, I am having difficulty offering a simple reproduction due to the complexity of our module. I will try to offer one soon.

What I found out today is that if I replace ${data.aws_partition.current.partition} with aws here, I no longer get the reported errors.
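
For context, this is the kind of substitution being described; the exact line is behind the "here" link, so the attribute shown is only an illustration:

# Before: the partition comes from a data source, which is unknown at plan
# time when that data source is deferred (e.g. by a depends_on on the module):
#   policy_arn = "arn:${data.aws_partition.current.partition}:iam::aws:policy/AmazonEKS_CNI_Policy"
# After: a hard-coded partition is known at plan time:
#   policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"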

@AndreiBanaruTakeda
Author

@bryantbiggs here is a reproduction

I have identified that the root cause is the fact that we use depends_on in our module; if I remove the depends_on, the errors go away.

We have our own module for the CNI plugin, and we need the node group module to wait for the CNI module to finish.

@bryantbiggs
Member

As I have stated before, Terraform advises against this because it is known to cause issues: hashicorp/terraform#26383 (comment)

The data sources will not be removed from this module, and therefore a depends_on within this module declaration will almost always result in errors or disruptive behavior.

@AndreiBanaruTakeda
Author

Understood, thanks!
I refactored the code to use a "fake" implicit dependency: https://medium.com/hashicorp-engineering/creating-module-dependencies-in-terraform-0-13-4322702dac4a
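
Roughly, the pattern from that article looks like this (module and variable names are made up):

# In a wrapper around the node group module, declare an input that exists only
# to be referenced:
variable "dependency_marker" {
  description = "Referenced only to create an implicit dependency"
  type        = any
  default     = null
}

# Caller side: referencing an output of the CNI module makes Terraform build
# the CNI module first, without the plan-time side effects of depends_on.
module "node_groups" {
  source            = "./modules/node-groups" # hypothetical wrapper
  dependency_marker = module.cni.ready        # hypothetical output of our CNI module
  # ...
}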

@vuskeedoo

vuskeedoo commented Feb 8, 2023

I am running into this issue when using random_string in tags. This code worked in 18.31.2.

│ Error: Invalid for_each argument
│ 
│   on .terraform/modules/example_eks.eks/main.tf line 96, in resource "aws_ec2_tag" "cluster_primary_security_group":
│   96:   for_each = { for k, v in merge(var.tags, var.cluster_tags) :
│   97:     k => v if local.create && k != "Name" && var.create_cluster_primary_security_group_tags && v != null
│   98:   }
│     ├────────────────
│     │ local.create is true
│     │ var.cluster_tags is empty map of string
│     │ var.create_cluster_primary_security_group_tags is true
│     │ var.tags is map of string with 3 elements
│ 
│ The "for_each" map includes keys derived from resource attributes that
│ cannot be determined until apply, and so Terraform cannot determine the
│ full set of keys that will identify the instances of this resource.
│ 
│ When working with unknown values in for_each, it's better to define the map
│ keys statically in your configuration and place apply-time results only in
│ the map values.
│ 
│ Alternatively, you could use the -target planning option to first apply
│ only the resources that the for_each value depends on, and then apply a
│ second time to fully converge.
╵
Operation failed: failed running terraform plan (exit 1)

Code snippet:

locals {
  cluster_id = lower("${var.environment}-${var.cluster_name}-${random_string.eks_suffix.result}")

  tags = merge(
    var.tags,
    {
      Environment = var.environment
      # Name = local.cluster_id
      Service = "eks"
    }
  )
}

resource "random_string" "eks_suffix" {
  length  = 8
  special = false
}
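
For what it's worth, a generic illustration of the error message's advice (keys static, apply-time results only in values); this is not tied to the snippet above:

# Keys derived from an apply-time value cannot be planned:
for_each = { (random_string.eks_suffix.result) = "value" }

# Keys written statically, with the apply-time result only in the value, plan fine:
for_each = { suffix = random_string.eks_suffix.result }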

@github-actions

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Mar 11, 2023