
If persistHome is enabled, the token in .kube/config isn't renewed #22924

Closed
batleforc opened this issue Apr 16, 2024 · 12 comments · Fixed by eclipse-che/che-dashboard#1141
Labels: kind/bug (Outline of a bug - must adhere to the bug report template), severity/P1 (Has a major impact to usage or development of the system), sprint/next

Comments


batleforc commented Apr 16, 2024

Describe the bug

Hello,
I have set up two kinds of environments, one based on the UDI and one that I built. With both images, and with the persistHome option enabled, I end up with a kubeconfig containing an outdated token after 12 hours (the token lifetime configured in the IdP).

This bug has been reproduced on Kubernetes (K3s, MicroK8s, kubeadm) and will also be tested on OpenShift.

Workaround: delete the /home/user/.kube folder and restart the workspace.

Che version

7.84@latest

Steps to reproduce

  1. Set up Eclipse Che with the persistHome option set to true (I hit the bug with both PerUser and PerWorkspace storage)
  2. Start a workspace
  3. Wait long enough for your token to no longer be valid
  4. Run kubectl get pod
  5. Observe the error.

Expected behavior

I expect my token to be renewed each time I start a workspace.

Runtime

Kubernetes (vanilla)

Screenshots

(screenshot omitted)

Installation method

chectl/latest, chectl/next

Environment

Windows, Linux

Eclipse Che Logs

No response

Additional context

No response

@batleforc batleforc added the kind/bug Outline of a bug - must adhere to the bug report template. label Apr 16, 2024
@AObuchow

@batleforc thanks for reporting. I believe this is a Che Dashboard issue: the Dashboard's backend is responsible for injecting the kubeconfig into the workspace pod, but this injection only happens if the kubeconfig file doesn't already exist in the pod's filesystem. When persistUserHome is enabled, the kubeconfig file is stored on the PVC and therefore survives workspace restarts.

The required fix would probably be to re-create the kubeconfig file on workspace startup if a certain amount of time has passed since the workspace was last started (I'm not sure we can actually track this), or to simply always re-inject/overwrite the kubeconfig file on workspace startup.
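The second option could be sketched as a small predicate in the Dashboard backend. This is a hypothetical sketch; the function and parameter names are illustrative, not the actual che-dashboard code:

```typescript
// Hypothetical sketch of the injection decision in the Dashboard backend.
// fileExists: whether ~/.kube/config is already present in the pod.
// persistUserHome: whether the workspace home directory is backed by a PVC.
function shouldInjectKubeconfig(fileExists: boolean, persistUserHome: boolean): boolean {
  // Current behaviour: inject only when the file is absent. With a persisted
  // home the file survives restarts, so a stale token is never replaced.
  // Proposed fix: always re-inject when the home directory is persisted.
  return !fileExists || persistUserHome;
}
```

The trade-off is that unconditional re-injection also overwrites any manual edits the user made to the file.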

@batleforc

If no other kubeconfig is mounted through a secret/configmap, couldn't we check whether the file matches the injected template, test whether the token still works, and update it if it doesn't?

@AObuchow

> If no other kubeconfig is mounted through a secret/configmap, couldn't we check whether the file matches the injected template, test whether the token still works, and update it if it doesn't?

That seems like a much better idea than my suggestions, +1 :)
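A minimal sketch of that token check, assuming the IdP issues JWTs (as Zitadel does). The function name and integration point are hypothetical, not the actual che-dashboard API:

```typescript
// Hypothetical sketch: decide whether the token in the persisted kubeconfig
// has expired by decoding the JWT `exp` claim. No signature verification is
// done here -- we only need the expiry; the API server does real validation.
function tokenIsExpired(token: string, nowSeconds = Math.floor(Date.now() / 1000)): boolean {
  const parts = token.split(".");
  if (parts.length !== 3) return true; // not a JWT: treat as stale and re-inject
  try {
    const payload = JSON.parse(Buffer.from(parts[1], "base64url").toString("utf8"));
    return typeof payload.exp !== "number" || payload.exp <= nowSeconds;
  } catch {
    return true; // unparsable payload: re-inject to be safe
  }
}
```

On workspace startup the Dashboard could read the token from the existing kubeconfig and overwrite the file only when this returns true, leaving a still-valid config untouched.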

@batleforc

I forgot to mention it, but the problem has also been reproduced in the latest version of DevSpaces on OpenShift.

@AObuchow AObuchow added the severity/P1 Has a major impact to usage or development of the system. label May 17, 2024
@batleforc

Hello,
Do you have any news on this issue?

@AObuchow

AObuchow commented Jul 2, 2024

@batleforc no update so far, unfortunately.

@ibuziuk maybe something for the next sprint for team A?

@batleforc

Is it possible for you to take a look? I kind of need this fixed :/

@AObuchow

AObuchow commented Jul 4, 2024

@batleforc Thank you for submitting a PR for this :) I've pinged members of the team responsible for the Che Dashboard to take a look at your PR.

For testing your PR, it's worth checking whether the token lifetime configured in the IdP can be reduced to less than 12 hours (so that the reviewers don't have to wait as long).

@batleforc

In the environments I tested, the IdP token lifetime was set to 6, 8, or 12 hours; I don't know how I can help further.
The IdP used was Zitadel.

@batleforc

And I force-logged out my user too.

@AObuchow

AObuchow commented Jul 4, 2024

@batleforc thanks for the info 🙏🏻
There's a time-zone difference with the members of the team responsible for the Che Dashboard, so they will probably take a look at your PR starting tomorrow.

@ibuziuk

ibuziuk commented Jul 24, 2024

@batleforc Thank you for the contribution, the fix should be part of the 7.89.0 - https://twitter.com/eclipse_che/status/1816081779607928954 🎉
