fix: Correct remote access variable for security groups and add example for additional IAM policies #1766

Merged
33 changes: 33 additions & 0 deletions README.md
@@ -18,6 +18,39 @@ Terraform module which creates AWS EKS (Kubernetes) resources
- Support for providing maps of node groups/Fargate profiles to the cluster module definition or use separate node group/Fargate profile sub-modules
- Provisions to provide node group/Fargate profile "default" settings - useful when creating multiple node groups/Fargate profiles where you want to set a common set of configurations once, and then individually control only select features

### ℹ️ `Error: Invalid for_each argument ...`

Users may encounter an error such as `Error: Invalid for_each argument - The "for_each" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use the -target argument to first apply ...`

This error is due to an upstream issue with [Terraform core](https://github.com/hashicorp/terraform/issues/4149). There are two potential options you can take to help mitigate this issue:

1. Create the dependent resources before the cluster => `terraform apply -target <your policy or your security group>` and then `terraform apply` for the cluster (or other similar means to just ensure the referenced resources exist before creating the cluster)
   - Note: this is the route users will have to take for adding additional security groups to nodes since there isn't a separate "security group attachment" resource
2. For additional IAM policies, users can attach the policies outside of the cluster definition as demonstrated below

```hcl
resource "aws_iam_role_policy_attachment" "additional" {
  for_each = module.eks.eks_managed_node_groups
  # you could also do the following or any combination:
  # for_each = merge(
  #   module.eks.eks_managed_node_groups,
  #   module.eks.self_managed_node_groups,
  #   module.eks.fargate_profiles,
  # )

  # This policy does not have to exist at the time of cluster creation. Terraform
  # can deduce the proper order of creation and avoid errors
  policy_arn = aws_iam_policy.node_additional.arn
  role       = each.value.iam_role_name
}
```
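
In short, this works because the keys of `module.eks.eks_managed_node_groups` come from the node group names defined in your configuration and are therefore known at plan time; the policy ARN only appears as an argument value, which Terraform can resolve during the apply.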

The tl;dr for this issue is that the Terraform resource passed into the module's map definition *must* be known before you can apply the EKS module. The variables this potentially affects are:

- `cluster_security_group_additional_rules` (i.e. - referencing an external security group resource in a rule)
- `node_security_group_additional_rules` (i.e. - referencing an external security group resource in a rule)
- `iam_role_additional_policies` (i.e. - referencing an external policy resource)
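
For example, a rule along the lines of the following sketch — where the referenced security group is created in the same configuration — is the kind of reference the list above describes (the surrounding module arguments are elided and the rule attributes are illustrative only):

```hcl
module "eks" {
  source = "terraform-aws-modules/eks/aws"

  # ... other cluster settings ...

  node_security_group_additional_rules = {
    ingress_custom = {
      description = "Ingress from an external security group"
      protocol    = "tcp"
      from_port   = 443
      to_port     = 443
      type        = "ingress"
      # aws_security_group.additional is created elsewhere in the same
      # configuration, so its ID is not known until apply
      source_security_group_id = aws_security_group.additional.id
    }
  }
}
```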

## Usage

```hcl
3 changes: 3 additions & 0 deletions examples/eks_managed_node_group/README.md
@@ -53,11 +53,14 @@ Note that this example may create resources which cost money. Run `terraform des

| Name | Type |
|------|------|
| [aws_iam_policy.node_additional](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_policy) | resource |
| [aws_iam_role_policy_attachment.additional](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role_policy_attachment) | resource |
| [aws_key_pair.this](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/key_pair) | resource |
| [aws_kms_key.ebs](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/kms_key) | resource |
| [aws_kms_key.eks](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/kms_key) | resource |
| [aws_launch_template.external](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/launch_template) | resource |
| [aws_security_group.additional](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/security_group) | resource |
| [aws_security_group.remote_access](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/security_group) | resource |
| [null_resource.patch](https://registry.terraform.io/providers/hashicorp/null/latest/docs/resources/resource) | resource |
| [tls_private_key.this](https://registry.terraform.io/providers/hashicorp/tls/latest/docs/resources/private_key) | resource |
| [aws_caller_identity.current](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/caller_identity) | data source |
59 changes: 58 additions & 1 deletion examples/eks_managed_node_group/main.tf
@@ -76,7 +76,8 @@ module "eks" {

# Remote access cannot be specified with a launch template
remote_access = {
ec2_ssh_key = aws_key_pair.this.key_name
ec2_ssh_key = aws_key_pair.this.key_name
source_security_group_ids = [aws_security_group.remote_access.id]
}
}

@@ -269,6 +270,18 @@ module "eks" {
tags = local.tags
}

# References to resources that do not exist yet when creating a cluster will cause a plan failure due to https://github.com/hashicorp/terraform/issues/4149
# There are two options users can take
# 1. Create the dependent resources before the cluster => `terraform apply -target <your policy or your security group>` and then `terraform apply`
#    Note: this is the route users will have to take for adding additional security groups to nodes since there isn't a separate "security group attachment" resource
# 2. For additional IAM policies, users can attach the policies outside of the cluster definition as demonstrated below
resource "aws_iam_role_policy_attachment" "additional" {
  for_each = module.eks.eks_managed_node_groups

  policy_arn = aws_iam_policy.node_additional.arn
  role       = each.value.iam_role_name
}

################################################################################
# aws-auth configmap
# Only EKS managed node groups automatically add roles to aws-auth configmap
@@ -529,3 +542,47 @@ resource "aws_key_pair" "this" {

tags = local.tags
}

resource "aws_security_group" "remote_access" {
  name_prefix = "${local.name}-remote-access"
  description = "Allow remote SSH access"
  vpc_id      = module.vpc.vpc_id

  ingress {
    description = "SSH access"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/8"]
  }

  egress {
    from_port        = 0
    to_port          = 0
    protocol         = "-1"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  tags = local.tags
}

resource "aws_iam_policy" "node_additional" {
  name        = "${local.name}-additional"
  description = "Example usage of node additional policy"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = [
          "ec2:Describe*",
        ]
        Effect   = "Allow"
        Resource = "*"
      },
    ]
  })

  tags = local.tags
}
2 changes: 1 addition & 1 deletion modules/eks-managed-node-group/README.md
@@ -139,7 +139,7 @@ module "eks_managed_node_group" {
| <a name="input_post_bootstrap_user_data"></a> [post\_bootstrap\_user\_data](#input\_post\_bootstrap\_user\_data) | User data that is appended to the user data script after the EKS bootstrap script. Not used when `platform` = `bottlerocket` | `string` | `""` | no |
| <a name="input_pre_bootstrap_user_data"></a> [pre\_bootstrap\_user\_data](#input\_pre\_bootstrap\_user\_data) | User data that is injected into the user data script ahead of the EKS bootstrap script. Not used when `platform` = `bottlerocket` | `string` | `""` | no |
| <a name="input_ram_disk_id"></a> [ram\_disk\_id](#input\_ram\_disk\_id) | The ID of the ram disk | `string` | `null` | no |
| <a name="input_remote_access"></a> [remote\_access](#input\_remote\_access) | Configuration block with remote access settings | `map(string)` | `{}` | no |
| <a name="input_remote_access"></a> [remote\_access](#input\_remote\_access) | Configuration block with remote access settings | `any` | `{}` | no |
| <a name="input_security_group_description"></a> [security\_group\_description](#input\_security\_group\_description) | Description for the security group created | `string` | `"EKS managed node group security group"` | no |
| <a name="input_security_group_name"></a> [security\_group\_name](#input\_security\_group\_name) | Name to use on security group created | `string` | `null` | no |
| <a name="input_security_group_rules"></a> [security\_group\_rules](#input\_security\_group\_rules) | List of security group rules to add to the security group created | `any` | `{}` | no |
2 changes: 1 addition & 1 deletion modules/eks-managed-node-group/variables.tf
@@ -334,7 +334,7 @@ variable "launch_template_version" {

variable "remote_access" {
description = "Configuration block with remote access settings"
type = map(string)
type = any
default = {}
}
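
The type change from `map(string)` to `any` is what allows `source_security_group_ids` to be passed alongside `ec2_ssh_key`: the block now mixes a string with a list of strings, which a `map(string)` cannot represent. A minimal sketch of the resulting usage, reusing the resource names from the eks_managed_node_group example above:

```hcl
# Sketch only — aws_key_pair.this and aws_security_group.remote_access are the
# resources defined in the eks_managed_node_group example in this PR
remote_access = {
  ec2_ssh_key               = aws_key_pair.this.key_name              # string
  source_security_group_ids = [aws_security_group.remote_access.id]   # list(string)
}
```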
