Failed to set up pod network because token expired due to bound service account token #852
Comments
Also, I notice there was an attempt to refresh the token, https://github.com/k8snetworkplumbingwg/multus-cni/pull/686/files, but that was never completed/merged to master. Wondering if there is any plan to fix this, now that there is a need to refresh the token or else Multus can potentially fail after an hour of running when service-account-extend-token-expiration is set to false. Thanks.
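For reference, the general approach that PR aims at (regenerating the on-disk kubeconfig whenever the projected service account token rotates) can be sketched roughly like this in Go. This is not the PR's actual code; the kubeconfig path and the 10-minute refresh interval are assumptions made purely for illustration:

```go
package main

import (
	"encoding/base64"
	"fmt"
	"log"
	"os"
	"time"
)

const (
	tokenPath      = "/var/run/secrets/kubernetes.io/serviceaccount/token"
	caPath         = "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"
	kubeconfigPath = "/host/etc/cni/net.d/multus.d/multus.kubeconfig" // assumed path
)

// writeKubeconfig regenerates the kubeconfig on disk using whatever token is
// currently projected into the pod, so the CNI binary always picks up a
// token that has not yet expired.
func writeKubeconfig(apiServer string) error {
	token, err := os.ReadFile(tokenPath)
	if err != nil {
		return err
	}
	ca, err := os.ReadFile(caPath)
	if err != nil {
		return err
	}
	kubeconfig := fmt.Sprintf(`apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    server: %s
    certificate-authority-data: %s
users:
- name: multus
  user:
    token: %q
contexts:
- name: multus-context
  context:
    cluster: local
    user: multus
current-context: multus-context
`, apiServer, base64.StdEncoding.EncodeToString(ca), string(token))
	return os.WriteFile(kubeconfigPath, []byte(kubeconfig), 0600)
}

func main() {
	apiServer := fmt.Sprintf("https://%s:%s",
		os.Getenv("KUBERNETES_SERVICE_HOST"), os.Getenv("KUBERNETES_SERVICE_PORT"))

	// Refresh well inside the default 1h expiry so the plugin never reads a
	// stale token from disk.
	for {
		if err := writeKubeconfig(apiServer); err != nil {
			log.Printf("failed to refresh kubeconfig: %v", err)
		}
		time.Sleep(10 * time.Minute)
	}
}
```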
aws/amazon-vpc-cni-k8s#1868 (comment): We've also experienced this and it caused much confusion for a while.
Yeah, this is definitely an issue that can occur! Let me try to resurrect #686 and see if I can get that in there. In the feature/multus-4.0 branch we have the "thick plugin" architecture, which will account for this with in-pod kube auth, but we should also have it work in the current version when certs are rotated.
This issue is stale because it has been open 90 days with no activity. Remove stale label or comment or this will be closed in 7 days.
@dougbtv - Any update on which version has this fix? This is also affecting whereabouts.
Bump... any ideas why this PR was closed? #686
Yeah, it should be open, it just went auto-stale.
Also, for what it's worth, I think with the thick plugin architecture it shouldn't be such a big deal: the plugin should be using the service account token in the pod, not a generated kubeconfig that resides on disk, so it should just use the updated token.
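To illustrate why a long-running in-pod daemon sidesteps the problem: a client built from the in-cluster config reads the projected token from the pod filesystem, and client-go re-reads that file as the kubelet rotates it. A minimal sketch (not Multus code; the namespace in the example call is an assumption used only for illustration):

```go
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// InClusterConfig points the client at the projected service account
	// token file; client-go reloads that file, so token rotation is handled
	// without restarting the process.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatalf("not running inside a pod: %v", err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Example call: list pods in the "default" namespace (assumed namespace,
	// purely for demonstration). This keeps working after the original token
	// would have expired, because the client uses the rotated token.
	pods, err := client.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("listed %d pods with a token that stays valid across rotation", len(pods.Items))
}
```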
Has anybody tried the thick plugin architecture in EKS already?
This issue is stale because it has been open 90 days with no activity. Remove stale label or comment or this will be closed in 7 days.
Bumping this again since it seems like it went stale. Any updates?
For EKS, the thick plugin is now available; so far the token issue seems to be resolved by using it.
This issue is stale because it has been open 90 days with no activity. Remove stale label or comment or this will be closed in 7 days.
What happened:
The pod is stuck at ContainerCreating and the following unauthorized error is shown when describing it:
What you expected to happen:
Multus sets up the pod network successfully and the pod can run.
Anything else we need to know?:
Bound service account tokens are turned on by default in K8s 1.21: https://github.com/kubernetes/enhancements/blob/master/keps/sig-auth/1205-bound-service-account-tokens/README.md
In previous versions of K8s, the service account token doesn't have an expiration.
In K8s 1.21, the token expires after 1 year, or after 1 hour if service-account-extend-token-expiration is false.
Looking at the Multus source code, I think the token is never updated after the Multus pod finishes its initial setup in entrypoint.sh.
Restarting the Multus pod fixes the problem, because a new token is then used.
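One way to confirm this diagnosis is to decode the JWT that Multus baked into its kubeconfig and check the exp claim; bound tokens carry an expiry, while legacy secret-based tokens do not. A hedged sketch in Go, relying only on the standard JWT layout (the default token path is an assumption and can be overridden on the command line):

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"log"
	"os"
	"strings"
	"time"
)

func main() {
	// Assumed default location; pass the path of the token extracted from
	// the Multus kubeconfig as the first argument instead.
	path := "/var/run/secrets/kubernetes.io/serviceaccount/token"
	if len(os.Args) > 1 {
		path = os.Args[1]
	}
	raw, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}

	// A JWT is header.payload.signature; the payload is base64url-encoded JSON.
	parts := strings.Split(strings.TrimSpace(string(raw)), ".")
	if len(parts) != 3 {
		log.Fatal("not a JWT")
	}
	payload, err := base64.RawURLEncoding.DecodeString(parts[1])
	if err != nil {
		log.Fatal(err)
	}

	var claims struct {
		Iat int64 `json:"iat"`
		Exp int64 `json:"exp"`
	}
	if err := json.Unmarshal(payload, &claims); err != nil {
		log.Fatal(err)
	}
	if claims.Exp == 0 {
		fmt.Println("legacy token: no expiry")
		return
	}
	fmt.Printf("issued %s, expires %s (expired: %v)\n",
		time.Unix(claims.Iat, 0), time.Unix(claims.Exp, 0),
		time.Now().After(time.Unix(claims.Exp, 0)))
}
```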
How to reproduce it (as minimally and precisely as possible):
In a K8s cluster with the API server argument service-account-extend-token-expiration set to false:
After the Multus pod has been running for an hour, create a new pod; the new pod will get stuck at ContainerCreating.
Environment:
Kubernetes version (use kubectl version): 1.21.5