Tokens don't refresh in personal configs #2360
Hello, Nikolay! Some thoughts:
Hello, @nabokihms. Subjectively, there are fewer problems with non-working configs, but they have not completely disappeared. I ran the following experiment:
Is this the correct behavior of Dex? I mean, even with …
It seems that there is a problem with concurrent calls to the …
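To illustrate why concurrent refresh calls can produce the "already claimed by another client" error seen later in this thread, here is a minimal simulation of one-time refresh-token rotation. This is not Dex's actual code; it only sketches the race under the assumption that a refresh token is invalidated the moment it is exchanged:

```python
# Minimal simulation of one-time refresh-token rotation (NOT Dex's actual
# implementation): exchanging a refresh token invalidates it and issues a
# replacement, so a second caller still holding the old token is rejected.

class TokenStore:
    def __init__(self):
        self._tokens = {"rt-1": "alice"}  # refresh token -> subject
        self._counter = 1

    def refresh(self, token):
        # Claiming the token removes it from the store (one-time use).
        subject = self._tokens.pop(token, None)
        if subject is None:
            # Same failure shape as the log in this issue.
            raise ValueError("invalid_grant: refresh token already claimed")
        self._counter += 1
        new_token = f"rt-{self._counter}"
        self._tokens[new_token] = subject
        return new_token

store = TokenStore()

# Two kubectl processes share one kubeconfig and both present the old token:
# the first refresh succeeds, the second fails.
print(store.refresh("rt-1"))  # → rt-2
try:
    store.refresh("rt-1")
except ValueError as e:
    print(e)  # → invalid_grant: refresh token already claimed
```

This matches the symptom reported below: two kubectl invocations close together both try to refresh, and whichever loses the race leaves the kubeconfig holding a dead refresh token.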
Hello everyone. I am also facing a similar issue, with a different error. I am generating an id-token and a refresh-token using Dex for kube-oidc-proxy authentication against a Kubernetes GKE cluster. After generating new tokens everything works fine: kubectl commands are first authenticated at the kube-oidc-proxy server and then forwarded to the Kubernetes API server on GKE. But once the id-token expires, the refresh-token should be used to obtain a new id-token; instead, running kubectl commands with the expired token gives
I am using the google connector and kubernetes storage. This is my configuration file:
staticClients:
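The posted configuration is truncated after `staticClients:`. For context, a Dex config combining the google connector with Kubernetes storage generally has roughly this shape; every value below is a placeholder, not taken from this issue:

```yaml
issuer: https://dex.example.com

storage:
  type: kubernetes
  config:
    inCluster: true

connectors:
- type: google
  id: google
  name: Google
  config:
    clientID: <google-oauth-client-id>
    clientSecret: <google-oauth-client-secret>
    redirectURI: https://dex.example.com/callback

staticClients:
- id: kube-oidc-proxy
  name: kube-oidc-proxy
  secret: <client-secret>
  redirectURIs:
  - http://localhost:8000
```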
@nabokihms, can you share your solution, please?
Hello, @ansh-lehri. I think your problem differs from the one @nixon89 has. Kubectl fails to reach Dex because of the TLS certificate. The first step is to check the auth flags in your kubeconfig.
Which certificate do you get if you try to access the Dex URL? Does it return a valid TLS certificate? Is the …
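For reference, the auth flags the comment above refers to live in the (legacy) kubectl OIDC auth-provider stanza of the kubeconfig; it looks roughly like this, with every hostname, path, and value a placeholder rather than anything from this issue:

```yaml
users:
- name: developer
  user:
    auth-provider:
      name: oidc
      config:
        idp-issuer-url: https://dex.example.com      # must serve a TLS cert kubectl trusts
        client-id: kubernetes
        client-secret: <client-secret>
        id-token: <current-id-token>
        refresh-token: <current-refresh-token>
        idp-certificate-authority: /etc/ssl/dex-ca.pem  # CA bundle for the Dex endpoint
```

One way to see which certificate Dex actually serves is `openssl s_client -connect dex.example.com:443 -servername dex.example.com`, then compare its issuer against the CA referenced by `idp-certificate-authority`.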
@nabokihms it looks like we are facing the same problem as the one described by @nixon89; it reproduces from time to time. Do you have any news about possible solutions?
@sgremyachikh I tried to describe the problem in detail in the linked issue #2547. I hope when we come up with a solution for calling refresh multiple times, your situation will also be fixed. |
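For anyone landing here later: newer Dex releases expose refresh-token rotation settings under `expiry.refreshTokens` that are intended to soften exactly this "already claimed" race. The field names below are from my reading of the Dex documentation, and the durations are purely illustrative, so verify both against the version you run:

```yaml
expiry:
  refreshTokens:
    reuseInterval: "3s"          # old token stays usable briefly after rotation,
                                 # so near-simultaneous kubectl refreshes don't race
    validIfNotUsedFor: "2160h"   # expire tokens idle for ~90 days
    absoluteLifetime: "3960h"    # hard cap on refresh-token lifetime
```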
Preflight Checklist
Version
2.30.2
Storage Type
Kubernetes
Installation Type
Official Helm chart
Expected Behavior
When the id-token expires, it is refreshed automatically via kubectl.
When the refresh-token expires, it is refreshed automatically via kubectl.
Actual Behavior
Case 1:
When some users run:
kubectl get pods
Kubectl returns an error (500 Internal Server Error).
The Dex logs show an error (400 Bad Request):
level=error msg="failed to refresh identity: oidc: failed to get refresh token: oauth2: cannot fetch token: 400 Bad Request\nResponse: {\"error\":\"invalid_grant\"}"
Case 2:
When some users run:
kubectl get pods -v 7
Kubectl returns an error (400).
The Dex logs show:
Response: {"error":"invalid_request","error_description":"Refresh token is invalid or has already been claimed by another client."}
Steps To Reproduce
I don't know how to reproduce these cases.
Users only use kubectl/k9s.
kubectl/k9s via OIDC may break after 1, 2, or 4 days, but may also break only after 45 days.
k8s APIServer options:
Our setup: k8s-apiserver + dex (in OIDC mode) + oidc.example.com (our custom OIDC server) + gangway (to hand out kubeconfigs to users) + oauth2-proxy/k8s-dashboard.
We have no problems with the tandem dex + oauth2-proxy/k8s-dashboard.
Problems occur only with the personal kubeconfigs used with kubectl/k9s.
Additional Information
Kubernetes v1.19.7
Dex 2.30.2
heptio/gangway 3.3.0
kubectl 1.19 to 1.22
Configuration
Logs
No response