
Refactor the AWS & EKS control-plane controllers to split the kubeconfig secret into two for Cluster Autoscaler #4607

Closed
cnmcavoy opened this issue Nov 1, 2023 · 7 comments · Fixed by #4648
Assignees
Labels
  • kind/feature — Categorizes issue or PR as related to a new feature.
  • lifecycle/stale — Denotes an issue or PR has remained open with no activity and has become stale.
  • needs-priority
  • needs-triage — Indicates an issue or PR lacks a `triage/foo` label and requires one.

Comments

@cnmcavoy
Contributor

cnmcavoy commented Nov 1, 2023

/kind feature

Describe the solution you'd like
Currently, the AWSControlPlane and AWSManagedControlPlane controllers create and manage the cluster kubeconfig secrets ("%s-kubeconfig" in the cluster's namespace). This secret contains a single key-value pair, with the value being the cluster's kubeconfig in the shape of:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0t...
    server: https://XXXXX.us-east-2.eks.amazonaws.com
  name: cluster-name
contexts:
- context:
    cluster: cluster-3
    user: cluster-3-capi-admin
  name: cluster-3-capi-admin@cluster-3
current-context: cluster-3-capi-admin@cluster-3
kind: Config
preferences: {}
users:
- name: cluster-3-capi-admin
  user:
    token: k8s-aws-v1...
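
For reference, as a Kubernetes object this is roughly what the managed secret looks like today; a minimal sketch, assuming the usual Cluster API conventions (data key value, secret type cluster.x-k8s.io/secret), with the name and namespace illustrative:

apiVersion: v1
kind: Secret
metadata:
  name: cluster-3-kubeconfig        # "%s-kubeconfig"
  namespace: default                # the cluster's namespace
type: cluster.x-k8s.io/secret       # assumed CAPI secret type
stringData:
  value: |                          # single key: the full kubeconfig shown above
    apiVersion: v1
    kind: Config
    # ...clusters, contexts, current-context, users as above...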

In the case of EKS clusters, the bearer token embedded in the token field expires quickly, and the CAPA controller regularly updates the secret to keep it fresh. However, this shape of config is hard for other controllers, like the Cluster Autoscaler, to consume, and leads to bugs like kubernetes/autoscaler#4784 which cannot be easily resolved by those controllers. If they reload the entire config, the user or other critical content may change; if they reload only the token within it, they are disregarding the rest of the contents and making unsafe assumptions.

I have discussed this problem with the folks over in SIG API-Machinery, and the suggestion was to split this secret into two key-value pairs, in this shape:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0t...
    server: https://XXXXX.us-east-2.eks.amazonaws.com
  name: cluster-name
contexts:
- context:
    cluster: cluster-3
    user: cluster-3-capi-admin
  name: cluster-3-capi-admin@cluster-3
current-context: cluster-3-capi-admin@cluster-3
kind: Config
preferences: {}
users:
- name: cluster-3-capi-admin
  user:
    tokenFile: ./bearer-token

And a second key-value pair, "bearer-token", with the contents:

k8s-aws-v1...
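
Put together, the refactored secret would carry two keys instead of one. A minimal sketch, keeping the assumed data key value for the kubeconfig and adding the proposed bearer-token key; name, namespace, and secret type are again illustrative:

apiVersion: v1
kind: Secret
metadata:
  name: cluster-3-kubeconfig
  namespace: default
type: cluster.x-k8s.io/secret
stringData:
  value: |                          # kubeconfig whose user entry references tokenFile: ./bearer-token
    apiVersion: v1
    kind: Config
    # ...clusters and contexts as above...
    users:
    - name: cluster-3-capi-admin
      user:
        tokenFile: ./bearer-token
  bearer-token: k8s-aws-v1...       # refreshed in place by the CAPA controller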

Additional context: kubernetes/autoscaler#5951 (comment)

This will allow controllers like the Cluster Autoscaler to mount the secret contents to a path, where it will populate two files. The existing API machinery should then be able to handle reloading the second bearer-token file without complications.
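
As an illustration of how a consumer would wire this up, the following pod spec fragment mounts the whole secret into one directory and points the autoscaler at the kubeconfig file; because both keys are projected as sibling files, the relative tokenFile: ./bearer-token reference resolves. The flag, image, and paths are illustrative, not the Cluster Autoscaler's exact configuration:

containers:
- name: cluster-autoscaler
  image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.28.0   # illustrative
  command:
  - ./cluster-autoscaler
  - --kubeconfig=/etc/kubernetes/workload/value   # the mounted kubeconfig key
  volumeMounts:
  - name: workload-kubeconfig
    mountPath: /etc/kubernetes/workload
    readOnly: true
volumes:
- name: workload-kubeconfig
  secret:
    secretName: cluster-3-kubeconfig              # both keys land in the mount directory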

Anything else you would like to add:
Changing the kubeconfig value to depend on the second key existing will break any users that explicitly consume only the single secret key via a subPath volume mount. Normal volume mounts should work, as both files will exist relative to each other.
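
For contrast, this is the kind of subPath mount that would break, since only the kubeconfig key is projected and ./bearer-token never appears next to it (illustrative fragment):

volumeMounts:
- name: workload-kubeconfig
  mountPath: /etc/kubernetes/workload/kubeconfig
  subPath: value                    # only this key is mounted; ./bearer-token will not exist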

@k8s-ci-robot k8s-ci-robot added kind/feature Categorizes issue or PR as related to a new feature. needs-priority needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Nov 1, 2023
@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

If CAPA/CAPI contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@cnmcavoy
Contributor Author

cnmcavoy commented Nov 1, 2023

/assign

@cnmcavoy cnmcavoy changed the title 🌱 Refactor the AWS & EKS control-plane controllers to split the kubeconfig secret into two for Cluster Autoscaler Refactor the AWS & EKS control-plane controllers to split the kubeconfig secret into two for Cluster Autoscaler Nov 1, 2023
@mloiseleur

\o That's great.
FluxCD has the same need and the same issue.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 18, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Apr 17, 2024
@cnmcavoy
Contributor Author

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Apr 17, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 16, 2024