gke-node-pool default name conflict fixed #3127

Merged
3 changes: 2 additions & 1 deletion modules/compute/gke-node-pool/README.md
@@ -299,12 +299,13 @@ limitations under the License.
| <a name="input_host_maintenance_interval"></a> [host\_maintenance\_interval](#input\_host\_maintenance\_interval) | Specifies the frequency of planned maintenance events. | `string` | `""` | no |
| <a name="input_image_type"></a> [image\_type](#input\_image\_type) | The default image type used by NAP when a new node pool is created. Use either COS\_CONTAINERD or UBUNTU\_CONTAINERD. | `string` | `"COS_CONTAINERD"` | no |
| <a name="input_initial_node_count"></a> [initial\_node\_count](#input\_initial\_node\_count) | The initial number of nodes for the pool. In regional clusters, this is the number of nodes per zone. Changing this setting after node pool creation has no effect. It cannot be set together with static\_node\_count and must be set to a value between autoscaling\_total\_min\_nodes and autoscaling\_total\_max\_nodes. | `number` | `null` | no |
| <a name="input_internal_ghpc_module_id"></a> [internal\_ghpc\_module\_id](#input\_internal\_ghpc\_module\_id) | DO NOT SET THIS MANUALLY. Automatically populated with the module id (unique blueprint-wide). | `string` | n/a | yes |
| <a name="input_kubernetes_labels"></a> [kubernetes\_labels](#input\_kubernetes\_labels) | Kubernetes labels to be applied to each node in the node group. Key-value pairs. <br/>(The `kubernetes.io/` and `k8s.io/` prefixes are reserved by Kubernetes Core components and cannot be specified) | `map(string)` | `null` | no |
| <a name="input_labels"></a> [labels](#input\_labels) | GCE resource labels to be applied to resources. Key-value pairs. | `map(string)` | n/a | yes |
| <a name="input_local_ssd_count_ephemeral_storage"></a> [local\_ssd\_count\_ephemeral\_storage](#input\_local\_ssd\_count\_ephemeral\_storage) | The number of local SSDs to attach to each node to back ephemeral storage.<br/>Uses NVMe interfaces. Must be supported by `machine_type`.<br/>When set to null, the default is either [set based on machine\_type](https://cloud.google.com/compute/docs/disks/local-ssd#choose_number_local_ssds) or chosen by GKE.<br/>[See above](#local-ssd-storage) for more info. | `number` | `null` | no |
| <a name="input_local_ssd_count_nvme_block"></a> [local\_ssd\_count\_nvme\_block](#input\_local\_ssd\_count\_nvme\_block) | The number of local SSDs to attach to each node to back block storage.<br/>Uses NVMe interfaces. Must be supported by `machine_type`.<br/>When set to null, the default is either [set based on machine\_type](https://cloud.google.com/compute/docs/disks/local-ssd#choose_number_local_ssds) or chosen by GKE.<br/>[See above](#local-ssd-storage) for more info. | `number` | `null` | no |
| <a name="input_machine_type"></a> [machine\_type](#input\_machine\_type) | The name of a Google Compute Engine machine type. | `string` | `"c2-standard-60"` | no |
| <a name="input_name"></a> [name](#input\_name) | The name of the node pool. If left blank, will default to the machine type. | `string` | `null` | no |
| <a name="input_name"></a> [name](#input\_name) | The name of the node pool. If not set, defaults to the machine type with the module id (unique blueprint-wide) as a suffix.<br/>If set manually, ensure the value is unique across all gke-node-pools. | `string` | `null` | no |
| <a name="input_placement_policy"></a> [placement\_policy](#input\_placement\_policy) | Group placement policy to use for the node pool's nodes. `COMPACT` is the only supported value for `type` currently. `name` is the name of the placement policy.<br/>It is assumed that the specified policy exists. To create a placement policy refer to https://cloud.google.com/sdk/gcloud/reference/compute/resource-policies/create/group-placement.<br/>Note: Placement policies have the [following](https://cloud.google.com/compute/docs/instances/placement-policies-overview#restrictions-compact-policies) restrictions. | <pre>object({<br/> type = string<br/> name = optional(string)<br/> })</pre> | <pre>{<br/> "name": null,<br/> "type": null<br/>}</pre> | no |
| <a name="input_project_id"></a> [project\_id](#input\_project\_id) | The project ID to host the cluster in. | `string` | n/a | yes |
| <a name="input_reservation_affinity"></a> [reservation\_affinity](#input\_reservation\_affinity) | Reservation resource to consume. When targeting SPECIFIC\_RESERVATION, specific\_reservations needs to be specified.<br/>Even though specific\_reservations is a list, only one reservation is allowed by the NodePool API.<br/>It is assumed that the specified reservation exists and has available capacity.<br/>For a shared reservation, also specify the project\_id in which it was created.<br/>To create a reservation refer to https://cloud.google.com/compute/docs/instances/reservations-single-project and https://cloud.google.com/compute/docs/instances/reservations-shared | <pre>object({<br/> consume_reservation_type = string<br/> specific_reservations = optional(list(object({<br/> name = string<br/> project = optional(string)<br/> })))<br/> })</pre> | <pre>{<br/> "consume_reservation_type": "NO_RESERVATION",<br/> "specific_reservations": []<br/>}</pre> | no |
4 changes: 3 additions & 1 deletion modules/compute/gke-node-pool/main.tf
@@ -33,6 +33,8 @@ locals {
autoscale_set = var.autoscaling_total_min_nodes != 0 || var.autoscaling_total_max_nodes != 1000
static_node_set = var.static_node_count != null
initial_node_set = try(var.initial_node_count > 0, false)

module_unique_id = replace(lower(var.internal_ghpc_module_id), "/[^a-z0-9\\-]/", "")
}

data "google_compute_default_service_account" "default_sa" {
@@ -42,7 +44,7 @@ data "google_compute_default_service_account" "default_sa" {
resource "google_container_node_pool" "node_pool" {
provider = google-beta

name = var.name == null ? var.machine_type : var.name
name = coalesce(var.name, "${var.machine_type}-${local.module_unique_id}")
cluster = var.cluster_id
node_locations = var.zones

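The new default in main.tf, `coalesce(var.name, "${var.machine_type}-${local.module_unique_id}")`, combined with the `module_unique_id` sanitization, can be mirrored in a short Python sketch. The function name and sample module id below are illustrative only, not part of the module:

```python
import re
from typing import Optional


def default_node_pool_name(machine_type: str, module_id: str,
                           name: Optional[str] = None) -> str:
    """Emulate the Terraform default-name logic for a gke-node-pool.

    Mirrors: coalesce(var.name, "${var.machine_type}-${local.module_unique_id}")
    """
    if name is not None:
        return name
    # Mirrors: replace(lower(var.internal_ghpc_module_id), "/[^a-z0-9\\-]/", "")
    module_unique_id = re.sub(r"[^a-z0-9\-]", "", module_id.lower())
    return f"{machine_type}-{module_unique_id}"


print(default_node_pool_name("c2-standard-60", "My_Pool-1"))
# c2-standard-60-mypool-1
```

Because the blueprint-wide module id is appended only when `name` is unset, two node pools with the same machine type no longer collide on the default name.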
2 changes: 2 additions & 0 deletions modules/compute/gke-node-pool/metadata.yaml
@@ -17,3 +17,5 @@ spec:
requirements:
services:
- container.googleapis.com
ghpc:
inject_module_id: internal_ghpc_module_id
10 changes: 9 additions & 1 deletion modules/compute/gke-node-pool/variables.tf
@@ -31,11 +31,19 @@ variable "zones" {
}

variable "name" {
description = "The name of the node pool. If left blank, will default to the machine type."
description = <<-EOD
The name of the node pool. If not set, defaults to the machine type with the module id (unique blueprint-wide) as a suffix.
If set manually, ensure the value is unique across all gke-node-pools.
EOD
type = string
default = null
}

variable "internal_ghpc_module_id" {
description = "DO NOT SET THIS MANUALLY. Automatically populated with the module id (unique blueprint-wide)."
type = string
}

variable "machine_type" {
description = "The name of a Google Compute Engine machine type."
type = string