Error: Invalid for_each argument #2337
Comments
We'll need a reproduction that we can try out. This has variables all over the place that are unknown |
Same error after upgrading from 18.20.x to 19.0.4 but with the
NOTE: downgrading back to 18.31.2 resolves the issue |
I have the same issue using eks-blueprints v4.18.1, which uses this module at v18.29.1. Error:
Also relating to: #1753 |
v18 of the EKS module is not relevant here, since it had a known issue with computed values for IAM policies and security groups; v19 corrected that behavior |
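For readers arriving from the error message alone, here is a minimal, hypothetical sketch (not taken from the module; AWS provider configuration assumed) of the class of failure being discussed: a for_each whose contents depend on a value Terraform cannot know until apply.

# The filter condition references a computed attribute that is unknown until
# apply, so Terraform cannot determine the contents of the for_each map at
# plan time and reports "Error: Invalid for_each argument".
resource "aws_iam_role" "node" {
  name = "example-node-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "broken" {
  for_each = {
    for k, v in toset([
      "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy",
      "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly",
    ]) : k => v
    if aws_iam_role.node.arn != "" # unknown at plan time
  }

  policy_arn = each.value
  role       = aws_iam_role.node.name
}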
Same here on the Jenkins Infrastructure project: the upgrade from 18.x to 19.y breaks with this error unless we comment out the |
Referenced commit (jenkins-infra/aws): "chore: comment out EKS module global tags because of terraform-aws-modules/terraform-aws-eks#2337" - updates eks-cluster.tf and eks-public-cluster.tf, made with updatecli. Signed-off-by: Damien Duportal <damien.duportal@gmail.com>
I face the same issue. However, I think it is a bit weird that adding the attribute
Maybe take the condition out of the for expression's filter:

resource "aws_iam_role_policy_attachment" "this" {
  # for_each = { for k, v in toset(compact([
  #   "${local.iam_role_policy_prefix}/AmazonEKSWorkerNodePolicy",
  #   "${local.iam_role_policy_prefix}/AmazonEC2ContainerRegistryReadOnly",
  #   var.iam_role_attach_cni_policy ? local.cni_policy : "",
  # ])) : k => v if var.create && var.create_iam_role }
  for_each = var.create && var.create_iam_role ? { for k, v in toset(compact([
    "${local.iam_role_policy_prefix}/AmazonEKSWorkerNodePolicy",
    "${local.iam_role_policy_prefix}/AmazonEC2ContainerRegistryReadOnly",
    var.iam_role_attach_cni_policy ? local.cni_policy : "",
  ])) : k => v } : {}

  policy_arn = each.value
  role       = aws_iam_role.this[0].name
} |
The above change, moving the condition check out of the for expression, does work. This should be the standard syntax for every resource that uses for_each, or, even better, for_each should be made smarter. If it were a choice between fixing for_each and going all of 2023 without any other updates, I'd take for_each hands down. |
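To spell out why the hoisting helps, here is a stripped-down, hypothetical illustration (not the module's actual code): when the create flag is known to be false at plan time, the ternary form short-circuits to a known empty map, whereas the filter form still has to build the key set first, so any unknown value among the keys makes the whole map unknown.

# Hypothetical, self-contained illustration using null_resource. "local.prefix"
# stands in for a value that may be unknown at plan time, e.g. one derived from
# a data source that a module-level depends_on has deferred until apply.
variable "create" {
  type    = bool
  default = false
}

locals {
  prefix = "arn:aws:iam::aws:policy" # imagine this comes from a data source
}

resource "null_resource" "example" {
  # Filter form (problematic when the keys are unknown):
  #   for_each = { for k, v in toset(["${local.prefix}/SomePolicy"]) : k => v if var.create }
  # Ternary form: with var.create known to be false, the result is a known
  # empty map and the (possibly unknown) keys never need to be evaluated.
  for_each = var.create ? { for k, v in toset(["${local.prefix}/SomePolicy"]) : k => v } : {}

  triggers = {
    policy = each.value
  }
}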
Without a proper reproduction that can be executed to show the error, we won't be able to offer any guidance. All I can say is that the examples we have provided in this project do work as intended |
OK, here's what I got. In the attached output you can see a hacked-up version of the karpenter.sh Terraform config that creates a VPC, an EKS cluster, and an external managed node group. After the failure I use Atom to open the local module file and make the change above, and Terraform can then plan and apply successfully. |
BTW, what triggers this failure is the managed node group depending on the managed_node_group_role module. That's not how we have things IRL. IRL we have another module in between the creation of the cluster and the creation of the MNG, but that would have been too much code to strip down into an example. |
This is GitHub, you can create a repo or gist if necessary - it's not safe opening zips off the internet |
Here's a repo with error. https://github.com/wpbeckwith/terraform-aws-eks-2337 |
Thank you for sharing that. That is quite a bit to unpack - may I ask what your motivation was for setting up your code this way? |
We have our Terraform configured to build and provision a cluster in one shot, without needing to run the config multiple times with the -target option to work around the issue where Terraform loses the ability to manipulate the aws-auth ConfigMap. So what we do is
Until EKS provides a better API, this works. |
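For context, a hedged sketch of the kind of wiring being described; the module names and inputs below are illustrative guesses, not the reporter's actual code. The managed node group module is made to wait on a module that manages the aws-auth ConfigMap via an explicit, module-level depends_on, and that explicit dependency is what later surfaces the for_each error.

# Illustrative only: "./modules/eks-auth" is a hypothetical in-house module
# that manages the aws-auth ConfigMap; VPC and subnet IDs are placeholders.
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 19.0"

  cluster_name    = "example"
  cluster_version = "1.24"
  vpc_id          = "vpc-0123456789abcdef0"
  subnet_ids      = ["subnet-0aaa1111", "subnet-0bbb2222"]
}

module "eks_auth" {
  source = "./modules/eks-auth" # hypothetical aws-auth management module

  cluster_name = module.eks.cluster_name
}

module "managed_node_group" {
  source  = "terraform-aws-modules/eks/aws//modules/eks-managed-node-group"
  version = "~> 19.0"

  name         = "example-mng"
  cluster_name = module.eks.cluster_name
  subnet_ids   = ["subnet-0aaa1111", "subnet-0bbb2222"]

  # This explicit module-level dependency defers the submodule's data sources
  # to apply time and is what triggers "Invalid for_each argument".
  depends_on = [module.eks_auth]
}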
Isn't this possible with the latest module today? |
Actually, the only reason we have a separate eks-node-role module is that, without it, our setup would also trigger the for_each issue. So we just create the role ourselves and pass it into the MNG module, as sketched below. |
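A hypothetical sketch of that approach (names and IDs are placeholders, not the reporter's code): create the node IAM role outside the module and hand it in, so the submodule's own IAM for_each never runs.

# External node role passed into the MNG submodule (placeholders throughout).
resource "aws_iam_role" "node" {
  name = "example-eks-node"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

module "managed_node_group" {
  source  = "terraform-aws-modules/eks/aws//modules/eks-managed-node-group"
  version = "~> 19.0"

  name         = "example-mng"
  cluster_name = "example"
  subnet_ids   = ["subnet-0aaa1111", "subnet-0bbb2222"] # placeholders

  # Skip the module-managed role and attach our own instead.
  create_iam_role = false
  iam_role_arn    = aws_iam_role.node.arn
}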
No
No, because as soon as you make the MNG depend on the eks-auth-map module you will trigger the for_each issue. However, with the change @timtorChen identified (moving the conditional check to the front of the for_each), the condition evaluates to false, only an empty map is given to for_each, and Terraform is then happy. |
Have you seen our examples? These do not have the issues you are speaking of, which is why I am curious as to why your setup is so piecemeal |
In Terraform, you do not set explicit depends_on |
I have seen the latest examples. I'm in the process of updating our modules to take advantage of all the changes made in 19.5.1. In some cases we can remove code that we had (e.g. Karpenter), but in others we still need to separate the pieces as I have stated. If you take my repo and do a
And herein lies the problem: 1, you do not need the depends_on. And 2, as I stated above, Terraform states this is not recommended because it's problematic: hashicorp/terraform#26383 (comment). So remove the depends_on |
I know that the depends_on in this module is not needed. I only added it there because it saved me from having to add our actual eks-auth module and then needing to remove any company-related blah blah blah. If I add that module, then we really do want the MNG to depend on it, and they currently share zero attributes, so an implicit dependency is not possible; only an explicit one works. If you really need to see that module in action, then I will add it to the repo, as I really want to get this issue resolved. |
No, I am trying to get to the bottom of whether there is a real issue in the module or not. Having to work backwards from zero information through a plethora of code fragments scattered across a very odd setup is not easy. As I said before, the examples we provide are not only examples but also what we use for tests. If you are able to modify one of our examples to demonstrate the error, that would be very useful. However, that will NEVER negate the fact that using an explicit depends_on |
Please provide a representative reproduction that demonstrates the issue, ideally using one of our examples modified to highlight the unintended behavior. Until then, I don't have anything further to offer |
Here is a reproduction that I think is close to some of the configs shown here, but it does not show any of the errors reported above. Please feel free to modify it to demonstrate the issue in question |
Any update on the reproduction? Is this still an issue? |
I also encountered a similar error message after upgrading from 0.18 to 0.19. |
I face this issue, and @wpbeckwith's patch in #2388 solved it for me. Thanks for the patch! |
Apologies, I am having difficulty offering a simple reproduction due to the complexity of our module. I will try to offer one soon. What I found out today is that if, here, I replace
@bryantbiggs here is a reproduction. I have identified that the root cause is the fact that we use depends_on in our module; if I remove the depends_on, the errors go away. We have our own module for the CNI plugin, and we need the node group module to wait for the CNI module to finish. |
As I have stated before, Terraform advises against this because it is known to cause issues: hashicorp/terraform#26383 (comment). The data sources will not be removed from this module, and therefore |
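To make the mechanism concrete, here is a hedged, two-file sketch (hypothetical names, null_resource standing in for the real IAM attachments, AWS provider configuration assumed): a module-level depends_on forces every data source inside the called module to be read at apply time, so for_each keys derived from that data become unknown during planning.

# --- modules/child/main.tf ---
data "aws_partition" "current" {}

locals {
  prefix = "arn:${data.aws_partition.current.partition}:iam::aws:policy"
}

resource "null_resource" "attachments" {
  # Without a module-level depends_on on the caller, the data source is read
  # at plan time and these keys are known. With it, the read is deferred and
  # this becomes "Error: Invalid for_each argument".
  for_each = toset(["${local.prefix}/AmazonEKSWorkerNodePolicy"])

  triggers = { policy = each.value }
}

# --- main.tf (root module) ---
resource "null_resource" "something_first" {}

module "child" {
  source = "./modules/child"

  # The explicit module-level dependency that Terraform's guidance
  # (hashicorp/terraform#26383) warns about.
  depends_on = [null_resource.something_first]
}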
Understood, thanks! |
I am running into this issue when using
Code snippet: |
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further. |
Description
Please provide a clear and concise description of the issue you are encountering, and a reproduction of your configuration (see the examples/* directory for references that you can copy+paste and tailor to match your configs if you are unable to copy your exact configuration). The reproduction MUST be executable by running terraform init && terraform apply without any further changes.
If your request is for a new feature, please use the Feature request template.
Before you submit an issue, please perform the following first:
- Remove the .terraform directory (! ONLY if state is stored remotely, which hopefully you are following that best practice!): rm -rf .terraform/
- Re-initialize: terraform init
Versions
Module version [Required]: 19.0.4
Terraform version: 1.2.7
Reproduction Code [Required]
Steps to reproduce the behavior:
Expected behavior
Should create X number of managed node groups.
Actual behavior
TFE throws an error that it cannot create the X number of node groups.
Terminal Output Screenshot(s)
Additional context
This happened after the upgrade from 18.31.2 to 19.0.4.