Support for disabling Legacy Metadata API endpoints on GKE Node Pools #2626

Closed
bgeesaman opened this issue Dec 11, 2018 · 18 comments

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment. If the issue is assigned to the "modular-magician" user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If the issue is assigned to a user, that user is claiming responsibility for the issue. If the issue is assigned to "hashibot", a community member has claimed the issue already.

Description

There is now an option to prevent the older metadata API versions from being available to GKE node pools. This can help prevent the simpler SSRF attack types from reaching the potentially sensitive metadata attributes (especially if the metadata concealment proxy is not in use).

Suggest adding a flag to the node_config > workload_metadata_config block to allow for control over this feature.

New or Affected Resource(s)

  • google_container_cluster
  • google_container_node_pool

Potential Terraform Configuration

   node_config {
     disk_size_gb = 500
     machine_type = "n1-standard-1"

     workload_metadata_config {
       node_metadata            = "SECURE"
       disable_legacy_endpoints = true
     }
   }
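
   The same block would presumably be exposed on google_container_node_pool as well. A sketch, assuming the attribute names proposed above land as-is:

   resource "google_container_node_pool" "example" {
     # "example" is a hypothetical name; disable_legacy_endpoints is the
     # proposed attribute, not an existing provider field
     node_config {
       workload_metadata_config {
         node_metadata            = "SECURE"
         disable_legacy_endpoints = true
       }
     }
   }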

References

@ghost ghost added the enhancement label Dec 11, 2018
@nat-henderson (Contributor)

Hey there,

Sorry, it doesn't look like this is available via the API. The provider can't interact with GKE through gcloud, and it seems like gcloud is the only way to use this feature right now. Tagging as upstream in case GKE adds it to the API.

@bgeesaman (Author) commented Dec 18, 2018

It looks like this is the POST body when using gcloud with the --log-http flag, and I can see it in the docs: https://godoc.org/google.golang.org/api/container/v1beta1#WorkloadMetadataConfig

Does that provide the necessary info? Or does that mean it belongs in the beta provider repo instead?

$ gcloud beta container clusters create mycluster \
    --workload-metadata-from-node=SECURE \
    --metadata disable-legacy-endpoints=true \
    --log-http --region us-east4
...snip...
{
  "cluster": {
    "name": "mycluster",
    "nodePools": [
      {
        "config": {
          "metadata": {
            "disable-legacy-endpoints": "true"
          },
          "oauthScopes": [
            "https://www.googleapis.com/auth/devstorage.read_only",
            "https://www.googleapis.com/auth/logging.write",
            "https://www.googleapis.com/auth/monitoring",
            "https://www.googleapis.com/auth/service.management.readonly",
            "https://www.googleapis.com/auth/servicecontrol",
            "https://www.googleapis.com/auth/trace.append"
          ],
          "workloadMetadataConfig": {
            "nodeMetadata": "SECURE"
          }
        },
        "initialNodeCount": 3,
        "management": {
          "autoRepair": true
        },
        "name": "default-pool"
      }
    ]
  }
}
...snip...

@nat-henderson (Contributor)

woah, --log-http is a gamechanger for me. Thanks so much!

@samhagan (Contributor) commented Jan 10, 2019

Based on @bgeesaman's output, I added the following to the node_config:

node_config {
  metadata {
    "disable-legacy-endpoints" = "true"
  }
}

It works as expected per the tests provided in the docs.
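
For what it's worth, on Terraform 0.12+ the same setting would be written with map assignment syntax; a minimal sketch, assuming HCL2:

node_config {
  # metadata is a map attribute, so HCL2 assigns it with "=" and the
  # hyphenated key must stay quoted
  metadata = {
    "disable-legacy-endpoints" = "true"
  }
}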

@lucazz commented Mar 14, 2019

I can confirm the behavior is still the same: even after adding the block pointed out by @samhagan, I couldn't get the state to match.

@samhagan (Contributor)

@lucazz I double-checked on two node pools, one with the metadata field set and one without. The one with the field set gets the expected result from the checks below, whereas the one without it can still reach the legacy endpoints: the first check should return "true", and the second (against the v1beta1 path) should be refused when legacy endpoints are disabled. How were you verifying it?

curl -H 'Metadata-Flavor: Google' \
'http://metadata.google.internal/computeMetadata/v1/instance/attributes/disable-legacy-endpoints'

curl 'http://metadata.google.internal/computeMetadata/v1beta1/instance/id'

@lucazz commented Mar 14, 2019

I added this block to the GKE control plane Terraform module we wrote and ran it; it spun up a cluster, but when I ran terraform plan again, it wanted to recreate the cluster because that particular parameter had changed. Applying that plan rebuilds the cluster, and all subsequent runs also try to rebuild it.

@lucazz commented Mar 14, 2019

Rolling back to 1.11 prevents this cycle from ever occurring.

@rileykarson (Collaborator) commented Mar 14, 2019

@lucazz do you mind sharing the provider version you're using, a minimal config, and the plan results? I think GoogleCloudPlatform/magic-modules#1507 solves this, but I'd like to confirm.

@rileykarson rileykarson self-assigned this Mar 14, 2019
@lucazz commented Mar 14, 2019

Sure thing:

Terraform v0.11.11
+ provider.google v2.2.0
+ provider.google-beta v2.2.0

Essentially, the module I have does a little bit of provisioning (Service Accounts, Passwords, Compute Addresses for ingress controllers), and this is how we're building our control plane:

resource "google_container_cluster" "this" {
  provider                          = "google-beta"
  project                           = "${var.project_id}"
  name                              = "${var.cluster_name}"
  region                            = "${var.cluster_region}"
  network                           = "${var.vpc_network}"
  subnetwork                        = "${var.vpc_subnetwork}"
  min_master_version                = "${var.kubernetes_version}"
  logging_service                   = "${var.logging_service}"
  monitoring_service                = "${var.monitoring_service}"
  timeouts                          = "${var.timeouts}"
  remove_default_node_pool          = "${var.remove_default_node_pool}"
  initial_node_count                = "${var.initial_node_count}"
  master_authorized_networks_config = "${var.allowed_cidr_blocks}"
  enable_legacy_abac                = "${var.enable_legacy_abac}"

  master_auth {
    username = "admin"
    password = " ${random_string.password.result}"
  }

  private_cluster_config {
    enable_private_nodes   = "${var.private_cluster}"
    master_ipv4_cidr_block = "${var.master_cidr}"
  }

  addons_config {
    horizontal_pod_autoscaling {
      disabled = "${var.horizontal_pod_autoscaling ? 0 : 1}"
    }
  }

  ip_allocation_policy {
    cluster_secondary_range_name  = "pods-range"
    services_secondary_range_name = "services-range"
  }

  maintenance_policy {
    daily_maintenance_window = ["${var.maintenance_window}"]
  }

  node_config {
    metadata {
        "disable-legacy-endpoints" = "true"
    }
    service_account = "${google_service_account.gke_service_account.email}"

    oauth_scopes = [
      "https://www.googleapis.com/auth/compute",
      "https://www.googleapis.com/auth/devstorage.read_only",
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring",
    ]
  }
}

Node pools are much simpler:

resource "google_container_node_pool" "this" {
  name               = "${var.node_pool_name}"
  region             = "${var.cluster_region}"
  cluster            = "${var.cluster}"
  project            = "${var.project_id}"
  initial_node_count = "${var.initial_node_pool_count}"
  autoscaling        = ["${var.node_pool_autoscaling_settings}"]
  management         = ["${var.node_pool_management_settings}"]

  node_config {
    preemptible     = "${var.node_pool_preemptible_flag}"
    machine_type    = "${var.node_pool_machine_type}"
    service_account = "${data.google_service_account.primary.email}"
    disk_size_gb    = "${var.node_pool_boot_disk_size}"
    disk_type       = "${var.node_pool_boot_disk_type}"
    oauth_scopes    = ["${var.node_pool_oauth_scopes}"]
  }
}
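
(Note: this node_config has no metadata map. On clusters where GKE sets disable-legacy-endpoints automatically, that omission is what produces the forced replacement shown in the plan below; the likely fix, mirroring @samhagan's block above, is a sketch like:)

  node_config {
    metadata {
      "disable-legacy-endpoints" = "true"
    }

    # ...existing preemptible/machine_type/service_account settings...
  }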

@lucazz commented Mar 14, 2019

And all the plans would look something like this:

-/+ module.development-control-plane.google_container_cluster.primary (new resource required)
      id:                                                                      "development" => <computed> (forces new resource)
      additional_zones.#:                                                      "3" => <computed>
      addons_config.#:                                                         "1" => "1"
      addons_config.0.cloudrun_config.#:                                       "0" => <computed>
      addons_config.0.horizontal_pod_autoscaling.#:                            "1" => "1"
      addons_config.0.horizontal_pod_autoscaling.0.disabled:                   "false" => "false"
      addons_config.0.http_load_balancing.#:                                   "0" => <computed>
      addons_config.0.istio_config.#:                                          "0" => <computed>
      addons_config.0.kubernetes_dashboard.#:                                  "1" => <computed>
      addons_config.0.network_policy_config.#:                                 "1" => <computed>
      cluster_autoscaling.#:                                                   "1" => <computed>
      cluster_ipv4_cidr:                                                       "10.0.64.0/18" => <computed>
      default_max_pods_per_node:                                               "110" => <computed>
      enable_binary_authorization:                                             "false" => "false"
      enable_kubernetes_alpha:                                                 "false" => "false"
      enable_legacy_abac:                                                      "false" => "true"
      enable_tpu:                                                              "false" => "false"
      endpoint:                                                                "34.73.182.58" => <computed>
      initial_node_count:                                                      "1" => "1"
      instance_group_urls.#:                                                   "3" => <computed>
      ip_allocation_policy.#:                                                  "1" => "1"
      ip_allocation_policy.0.cluster_ipv4_cidr_block:                          "10.0.64.0/18" => <computed>
      ip_allocation_policy.0.cluster_secondary_range_name:                     "pods-range" => "pods-range"
      ip_allocation_policy.0.services_ipv4_cidr_block:                         "10.0.128.0/20" => <computed>
      ip_allocation_policy.0.services_secondary_range_name:                    "services-range" => "services-range"
      ip_allocation_policy.0.use_ip_aliases:                                   "true" => "true"
      logging_service:                                                         "logging.googleapis.com/kubernetes" => "logging.googleapis.com/kubernetes"
      maintenance_policy.#:                                                    "1" => "1"
      maintenance_policy.0.daily_maintenance_window.#:                         "1" => "1"
      maintenance_policy.0.daily_maintenance_window.0.duration:                "PT4H0M0S" => <computed>
      maintenance_policy.0.daily_maintenance_window.0.start_time:              "08:00" => "08:00"
      master_auth.#:                                                           "1" => <computed>
      master_authorized_networks_config.#:                                     "1" => "1"
      master_authorized_networks_config.0.cidr_blocks.#:                       "12" => "9"
      master_authorized_networks_config.0.cidr_blocks.1043589421.cidr_block:   "18.213.176.41/32" => "18.213.176.41/32"
      master_authorized_networks_config.0.cidr_blocks.1043589421.display_name: "codefresh-api-3" => "codefresh-api-3"
      master_authorized_networks_config.0.cidr_blocks.1084578564.cidr_block:   "147.234.99.188/32" => "147.234.99.188/32"
      master_authorized_networks_config.0.cidr_blocks.1084578564.display_name: "codefresh-api-6" => "codefresh-api-6"
      master_authorized_networks_config.0.cidr_blocks.1332616489.cidr_block:   "104.155.130.126/32" => "104.155.130.126/32"
      master_authorized_networks_config.0.cidr_blocks.1332616489.display_name: "codefresh-api-5" => "codefresh-api-5"
      master_authorized_networks_config.0.cidr_blocks.151563438.cidr_block:    "181.222.156.215/32" => ""
      master_authorized_networks_config.0.cidr_blocks.151563438.display_name:  "lucas" => ""
      master_authorized_networks_config.0.cidr_blocks.1549090175.cidr_block:   "35.192.44.48/32" => "35.192.44.48/32"
      master_authorized_networks_config.0.cidr_blocks.1549090175.display_name: "codefresh-api-0" => "codefresh-api-0"
      master_authorized_networks_config.0.cidr_blocks.1857980394.cidr_block:   "24.218.157.244/32" => ""
      master_authorized_networks_config.0.cidr_blocks.1857980394.display_name: "cole" => ""
      master_authorized_networks_config.0.cidr_blocks.2015702880.cidr_block:   "168.61.212.0/24" => ""
      master_authorized_networks_config.0.cidr_blocks.2015702880.display_name: "test" => ""
      master_authorized_networks_config.0.cidr_blocks.2322288652.cidr_block:   "104.154.63.253/32" => "104.154.63.253/32"
      master_authorized_networks_config.0.cidr_blocks.2322288652.display_name: "codefresh-api-1" => "codefresh-api-1"
      master_authorized_networks_config.0.cidr_blocks.2335865906.cidr_block:   "13.59.201.170/32" => "13.59.201.170/32"
      master_authorized_networks_config.0.cidr_blocks.2335865906.display_name: "codefresh-api-4" => "codefresh-api-4"
      master_authorized_networks_config.0.cidr_blocks.3531888586.cidr_block:   "146.148.100.14/32" => "146.148.100.14/32"
      master_authorized_networks_config.0.cidr_blocks.3531888586.display_name: "codefresh-api-8" => "codefresh-api-8"
      master_authorized_networks_config.0.cidr_blocks.4217467190.cidr_block:   "104.154.99.188/32" => "104.154.99.188/32"
      master_authorized_networks_config.0.cidr_blocks.4217467190.display_name: "codefresh-api-7" => "codefresh-api-7"
      master_authorized_networks_config.0.cidr_blocks.919241553.cidr_block:    "104.197.160.122/32" => "104.197.160.122/32"
      master_authorized_networks_config.0.cidr_blocks.919241553.display_name:  "codefresh-api-2" => "codefresh-api-2"
      master_ipv4_cidr_block:                                                  "" => <computed>
      master_version:                                                          "1.12.5-gke.10" => <computed>
      min_master_version:                                                      "1.12.5-gke.10" => "1.12.5-gke.10"
      monitoring_service:                                                      "monitoring.googleapis.com/kubernetes" => "monitoring.googleapis.com/kubernetes"
      name:                                                                    "development" => "development"
      network:                                                                 "projects/development-aae6/global/networks/development" => "https://www.googleapis.com/compute/v1/projects/development-aae6/global/networks/development"
      network_policy.#:                                                        "1" => <computed>
      node_config.#:                                                           "1" => "1"
      node_config.0.disk_size_gb:                                              "128" => <computed>
      node_config.0.disk_type:                                                 "pd-ssd" => <computed>
      node_config.0.guest_accelerator.#:                                       "0" => <computed>
      node_config.0.image_type:                                                "COS" => <computed>
      node_config.0.local_ssd_count:                                           "0" => <computed>
      node_config.0.machine_type:                                              "n1-standard-1" => <computed>
      node_config.0.metadata.%:                                                "1" => "0" (forces new resource)
      node_config.0.metadata.disable-legacy-endpoints:                         "true" => "" (forces new resource)
      node_config.0.oauth_scopes.#:                                            "4" => "4"
      node_config.0.oauth_scopes.1277378754:                                   "https://www.googleapis.com/auth/monitoring" => "https://www.googleapis.com/auth/monitoring"
      node_config.0.oauth_scopes.1632638332:                                   "https://www.googleapis.com/auth/devstorage.read_only" => "https://www.googleapis.com/auth/devstorage.read_only"
      node_config.0.oauth_scopes.172152165:                                    "https://www.googleapis.com/auth/logging.write" => "https://www.googleapis.com/auth/logging.write"
      node_config.0.oauth_scopes.299962681:                                    "https://www.googleapis.com/auth/compute" => "https://www.googleapis.com/auth/compute"
      node_config.0.preemptible:                                               "false" => "false"
      node_config.0.service_account:                                           "dev-gke@development-aae6.iam.gserviceaccount.com" => "dev-gke@development-aae6.iam.gserviceaccount.com"
      node_pool.#:                                                             "1" => <computed>
      node_version:                                                            "1.12.5-gke.10" => <computed>
      private_cluster:                                                         "" => <computed>
      private_cluster_config.#:                                                "1" => "1"
      private_cluster_config.0.enable_private_nodes:                           "true" => "true"
      private_cluster_config.0.master_ipv4_cidr_block:                         "10.0.144.0/28" => "10.0.144.0/28"
      private_cluster_config.0.private_endpoint:                               "10.0.144.2" => <computed>
      private_cluster_config.0.public_endpoint:                                "34.73.182.58" => <computed>
      project:                                                                 "development-aae6" => "development-aae6"
      region:                                                                  "us-east1" => "us-east1"
      remove_default_node_pool:                                                "true" => "true"
      subnetwork:                                                              "projects/development-aae6/regions/us-east1/subnetworks/development" => "https://www.googleapis.com/compute/v1/projects/development-aae6/regions/us-east1/subnetworks/development"
      tpu_ipv4_cidr_block:                                                     "" => <computed>
      zone:                                                                    "us-east1" => <computed>
-/+ module.development-node-pool.google_container_node_pool.primary (new resource required)
      id:                                                                      "us-east1/development/node-pool" => <computed> (forces new resource)
      autoscaling.#:                                                           "1" => "1"
      autoscaling.0.max_node_count:                                            "2" => "2"
      autoscaling.0.min_node_count:                                            "2" => "2"
      cluster:                                                                 "development" => "development"
      initial_node_count:                                                      "2" => "2"
      instance_group_urls.#:                                                   "3" => <computed>
      management.#:                                                            "1" => "1"
      management.0.auto_repair:                                                "true" => "true"
      management.0.auto_upgrade:                                               "true" => "true"
      max_pods_per_node:                                                       "" => <computed>
      name:                                                                    "node-pool" => "node-pool"
      name_prefix:                                                             "" => <computed>
      node_config.#:                                                           "1" => "1"
      node_config.0.disk_size_gb:                                              "128" => "128"
      node_config.0.disk_type:                                                 "pd-ssd" => "pd-ssd"
      node_config.0.guest_accelerator.#:                                       "0" => <computed>
      node_config.0.image_type:                                                "COS" => <computed>
      node_config.0.local_ssd_count:                                           "0" => <computed>
      node_config.0.machine_type:                                              "n1-standard-1" => "n1-standard-1"
      node_config.0.metadata.%:                                                "1" => "0" (forces new resource)
      node_config.0.metadata.disable-legacy-endpoints:                         "true" => "" (forces new resource)
      node_config.0.oauth_scopes.#:                                            "4" => "4"
      node_config.0.oauth_scopes.1277378754:                                   "https://www.googleapis.com/auth/monitoring" => "https://www.googleapis.com/auth/monitoring"
      node_config.0.oauth_scopes.1632638332:                                   "https://www.googleapis.com/auth/devstorage.read_only" => "https://www.googleapis.com/auth/devstorage.read_only"
      node_config.0.oauth_scopes.172152165:                                    "https://www.googleapis.com/auth/logging.write" => "https://www.googleapis.com/auth/logging.write"
      node_config.0.oauth_scopes.299962681:                                    "https://www.googleapis.com/auth/compute" => "https://www.googleapis.com/auth/compute"
      node_config.0.preemptible:                                               "false" => "false"
      node_config.0.service_account:                                           "dev-gke@development-aae6.iam.gserviceaccount.com" => "dev-gke@development-aae6.iam.gserviceaccount.com"
      node_count:                                                              "2" => <computed>
      project:                                                                 "development-aae6" => "development-aae6"
      region:                                                                  "us-east1" => "us-east1"
      version:                                                                 "1.12.5-gke.10" => <computed>
      zone:                                                                    "" => <computed>
Plan: 2 to add, 0 to change, 2 to destroy.
------------------------------------------------------------------------
[...]

@rileykarson (Collaborator)

Can you confirm that plan came from the exact google_container_cluster config above? It shouldn't have that diff, but the node pool should without this in node_config:

    metadata {
      disable-legacy-endpoints = "true"
    }

Regardless, GoogleCloudPlatform/magic-modules#1507 should resolve the issue.
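
For anyone stuck on a release before 2.3.0, one interim option (not mentioned in this thread; a sketch assuming Terraform 0.11's string-path ignore_changes syntax) is to have Terraform ignore the server-populated metadata:

resource "google_container_node_pool" "this" {
  # ...

  lifecycle {
    # Prefix-matches node_config.0.metadata.* in the flatmap state, so the
    # GKE-set disable-legacy-endpoints key no longer forces replacement.
    ignore_changes = ["node_config.0.metadata"]
  }
}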

@lucazz commented Mar 14, 2019

Now that you mention it, it does look kind of off; I think this is from an older iteration of the issue.
I can run it again if you'd like.
When will that change come out in beta?

@rileykarson (Collaborator)

That change will be applied to both providers at the same time, in 2.3.0. That should hit in < 2 weeks.

@rileykarson (Collaborator)

& yeah, if you don't mind running again. node_configs with the metadata explicitly defined should see no diff, and clusters pre1.12 shouldn't have one either.

@lucazz commented Mar 14, 2019

kk, let me give it a try

@rileykarson (Collaborator)

This is resolved by GoogleCloudPlatform/magic-modules#1507. If you're still experiencing issues on a release after 2.3.0, please let me know!

@ghost commented May 26, 2019

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 hashibot-feedback@hashicorp.com. Thanks!

@ghost ghost locked and limited conversation to collaborators May 26, 2019