Improve Reservation Validation Error Message
arajmane-g committed Oct 29, 2024
1 parent cb721a3 commit bc23059
Showing 3 changed files with 52 additions and 3 deletions.
34 changes: 34 additions & 0 deletions modules/compute/gke-node-pool/README.md
@@ -223,6 +223,40 @@ Finally, the following is adding multivpc to a node pool:
...
```

## Using GCE Reservations
You can reserve Google Compute Engine instances in a specific zone to ensure that resources are available for your workloads when needed. For more details on how to manage reservations, see [Reserving Compute Engine zonal resources](https://cloud.google.com/compute/docs/instances/reserving-zonal-resources).

After creating a reservation, you can consume the reserved GCE VM instances in GKE. GKE clusters deployed using Cluster Toolkit support the same consumption modes as Compute Engine: NO_RESERVATION (default), ANY_RESERVATION, and SPECIFIC_RESERVATION.

This can be accomplished using [`reservation_affinity`](https://github.com/GoogleCloudPlatform/cluster-toolkit/blob/main/modules/compute/gke-node-pool/README.md#input_reservation_affinity).

```yaml
# Target any reservation
reservation_affinity:
  consume_reservation_type: ANY_RESERVATION

# Target a specific reservation
reservation_affinity:
  consume_reservation_type: SPECIFIC_RESERVATION
  specific_reservations:
  - name: specific-reservation-1
```
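
The settings above are consumed by the gke-node-pool module inside a Cluster Toolkit blueprint. A minimal sketch of where they fit is shown below; the module id, the `use` reference, and the machine type are illustrative and not taken from an existing example blueprint:

```yaml
  - id: reserved_node_pool
    source: modules/compute/gke-node-pool
    use: [gke_cluster]
    settings:
      machine_type: n2-standard-4
      reservation_affinity:
        consume_reservation_type: ANY_RESERVATION
```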

The following requirements must be satisfied for the node pool nodes to use a specific reservation (see the sketch after this list for an example of a matching configuration):
1. A reservation with that name must exist in the specified project (`var.project_id`) and in one of the specified zones (`var.zones`).
2. Its consumption type must be `specific`.
3. Its GCE VM properties must match those of the node pool: machine type, accelerators (GPU type and count), and local SSD disk type and count.
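
For example (hypothetical names and machine shape, shown only to illustrate requirement 3), a node pool intended to consume a reservation created for `a2-highgpu-2g` VMs with two NVIDIA A100 GPUs could be shaped like this, so that its machine type, accelerators, and local SSD configuration all line up with the reservation:

```yaml
  - id: a100_node_pool
    source: modules/compute/gke-node-pool
    use: [gke_cluster]
    settings:
      machine_type: a2-highgpu-2g          # must equal the reservation's machine type
      guest_accelerator:                   # GPU type and count must match the reservation
      - type: nvidia-tesla-a100
        count: 2
      # local_ssd_count_ephemeral_storage / local_ssd_count_nvme_block must also
      # match the reservation; module defaults apply when they are unset.
      reservation_affinity:
        consume_reservation_type: SPECIFIC_RESERVATION
        specific_reservations:
        - name: specific-reservation-1
```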

To use a shared reservation, the owner project of the reservation must be specified explicitly, as in the following example. Note that a shared reservation can be used by the project that hosts it (the owner project) and by the projects it is shared with (consumer projects). See how to [create and use a shared reservation](https://cloud.google.com/compute/docs/instances/reservations-shared).

```yaml
reservation_affinity:
  consume_reservation_type: SPECIFIC_RESERVATION
  specific_reservations:
  - name: specific-reservation-shared
    project: shared_reservation_owner_project_id
```

## License

<!-- BEGINNING OF PRE-COMMIT-TERRAFORM DOCS HOOK -->
9 changes: 6 additions & 3 deletions modules/compute/gke-node-pool/main.tf
@@ -225,9 +225,12 @@ resource "google_container_node_pool" "node_pool" {
      )
      error_message = <<-EOT
      Check if your reservation is configured correctly:
-     1. A reservation with the name must exist in the specified project and one of the specified zones
-     2. Its consumption type must be "specific"
-     3. Its VM Properties must match with those of the Node Pool; Machine type, Accelerators (GPU Type and count), Local SSD disk type and count
+     - A reservation with the name must exist in the specified project and one of the specified zones
+     - Its consumption type must be "specific"
+     %{for property in local.specific_reservation_requirement_violations}
+     - ${local.specific_reservation_requirement_violation_messages[property]}
+     %{endfor}
      EOT
    }
  }
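
For illustration only (the machine types below are made up, not taken from the repository): with a node pool whose machine type does not match its specific reservation, the new message assembles one bullet per violated requirement and would render roughly as:

```text
Check if your reservation is configured correctly:
- A reservation with the name must exist in the specified project and one of the specified zones
- Its consumption type must be "specific"
- The reservation has "a2-highgpu-2g" machine type and the node pool has "n2-standard-4". Check the relevant node pool setting: "machine_type"
```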
12 changes: 12 additions & 0 deletions modules/compute/gke-node-pool/reservation_definitions.tf
@@ -66,4 +66,16 @@ locals {
  # Note that in map comparison the order of keys does not matter; that is, {NVME: x, SCSI: y} and {SCSI: y, NVME: x} are equal.
  # As of this writing, only one reservation is supported by the node pool API, so it is accessed directly from the list.
  # The comprehension below lists the node pool VM properties that do not match the reservation.
  specific_reservation_requirement_violations = length(local.reservation_vm_properties) == 0 ? [] : [for k, v in local.nodepool_vm_properties : k if v != local.reservation_vm_properties[0][k]]

  specific_reservation_requirement_violation_messages = {
    "machine_type" : <<-EOT
      The reservation has "${try(local.reservation_vm_properties[0].machine_type, "")}" machine type and the node pool has "${local.nodepool_vm_properties.machine_type}". Check the relevant node pool setting: "machine_type"
    EOT
    "guest_accelerators" : <<-EOT
      The reservation has ${jsonencode(try(local.reservation_vm_properties[0].guest_accelerators, {}))} accelerators and the node pool has ${jsonencode(try(local.nodepool_vm_properties.guest_accelerators, {}))}. Check the relevant node pool setting: "guest_accelerator". When unspecified, for the machine_type=${var.machine_type}, the default is guest_accelerator=${jsonencode(try(local.generated_guest_accelerator, [{}]))}.
    EOT
    "local_ssds" : <<-EOT
      The reservation has ${jsonencode(try(local.reservation_vm_properties[0].local_ssds, {}))} local SSDs and the node pool has ${jsonencode(try(local.nodepool_vm_properties.local_ssds, {}))}. Check the relevant node pool settings: {local_ssd_count_ephemeral_storage, local_ssd_count_nvme_block}. When unspecified, for the machine_type=${var.machine_type} the defaults are: {local_ssd_count_ephemeral_storage=${coalesce(local.generated_local_ssd_config.local_ssd_count_ephemeral_storage, 0)}, local_ssd_count_nvme_block=${coalesce(local.generated_local_ssd_config.local_ssd_count_nvme_block, 0)}}.
    EOT
  }
}
