Authenticated/Authorized access to kubelet API #89
Comments
cc @kubernetes/sig-auth
Just noting approval for the feature.
@philips yes, I'll be doing the work
Just wondering, will this be in 1.5? Trying to determine whether our team needs to invest the time in securing these communications via SSH and firewall rules to prevent https://github.com/kayrus/kubelet-exploit, or whether we can hold off for a bit and use our existing TLS client certs once this lands.
Yes, we are targeting 1.5.
@deads2k awesome, thanks!
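For readers in the same situation, a minimal sketch of what enabling this looks like once it lands. This is not from the thread itself; the flags match the proposal, but the paths are placeholders for your environment:

```shell
# Sketch (not authoritative): kubelet flags for authenticated/authorized API access.
# --anonymous-auth=false            reject unauthenticated requests
# --client-ca-file                  verify x509 client certificates against this CA
# --authentication-token-webhook    validate bearer tokens via the TokenReview API
# --authorization-mode=Webhook      delegate decisions via the SubjectAccessReview API
kubelet \
  --anonymous-auth=false \
  --client-ca-file=/etc/kubernetes/pki/ca.crt \
  --authentication-token-webhook \
  --authorization-mode=Webhook
```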
Automatic merge from submit-queue. Proposal: kubelet authentication/authorization. Proposal for kubernetes/enhancements#89
/cc @kubernetes/huawei |
Automatic merge from submit-queue. kubelet authn/authz. Implements https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/kubelet-auth.md — part of the [Authenticated/Authorized access to kubelet API](kubernetes/enhancements#89) feature.
Automatic merge from submit-queue. Cleanup auth logging; allow starting a secured kubelet in local-up-cluster.sh. Cleanup for kubernetes/enhancements#89
Does this feature target alpha for 1.5?
@idvoretskyi at least alpha. @erictune @liggitt This mirrors the mechanism we've used in OpenShift since 1.0. Want to call it beta?
I'd be comfortable with beta. It is built on top of two beta APIs and has been tested. The remaining work is automated load testing and default enablement in the various install/setup methods.
No objection to beta. Ask the node team too?
I am a lead on @kubernetes/sig-node and agree this should be beta.
I'm removing the team/SIG-Node label, given that SIG-Auth is listed as the owner in the 1.5 feature spreadsheet. If we're using labels to describe areas of overlap, then we need something else in these issues to identify the owner.
@liggitt can you confirm that this item targets stable in 1.6?
Yes |
keeping in beta status until the TokenReview and SubjectAccessReview APIs move to stable (now targeting 1.7) |
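For context on the dependency mentioned above: the kubelet delegates bearer-token validation to the API server's TokenReview endpoint. A hedged illustration of that call (the token value is a placeholder; the API group was authentication.k8s.io/v1beta1 at the time of this comment and is v1 today):

```shell
# Illustration only: a delegated TokenReview, the same kind of call the
# kubelet's authentication webhook makes to validate a bearer token.
cat <<'EOF' | kubectl create -f - -o yaml
apiVersion: authentication.k8s.io/v1
kind: TokenReview
spec:
  token: "<bearer-token-presented-to-the-kubelet>"
EOF
# status.authenticated and status.user in the response tell the kubelet
# who the caller is.
```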
@liggitt @idvoretskyi moving to the next milestone then
Hi guys! So, what's the status on this? I saw some PR for docs which seems to have changed. Having a secure kubernetes cluster right out of
That is the case. kubeadm uses the latest stable security features for pretty much every release. Regarding your other questions, I don't think they are very relevant to this feature. This feature is about locking down the kubelet API endpoint, not network (Pod2Pod, Node2Node) communication. @liggitt I guess we could close this. Stable in v1.6, which was released some time ago. Most providers (like kubeadm) enable this by default.
Locking it down as in access control applied at the level of the kubelet API endpoint itself, right? Not preventing access to it, because I can still pretty much connect to it. In no way have I considered the implications/work of such a feature given Kubernetes' architecture. Simply, by my book, no port to connect to is better than having one available.
The kubelet port at
That will yield an
🤔 I was pretty sure I got a
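To make the exchange above concrete, a sketch of probing the kubelet's secure port (10250 by default). The node address, certificate, and key paths are placeholders, and the exact error text depends on the kubelet version:

```shell
# Anonymous request against a kubelet running with --anonymous-auth=false:
curl -sk https://NODE_IP:10250/pods
# expect an authentication/authorization error rather than the pod list

# The same request presenting a client certificate signed by the CA the
# kubelet trusts (--client-ca-file):
curl -sk --cert client.crt --key client.key https://NODE_IP:10250/pods
```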
By the way, the documentation is OK at detailing the concepts of Nodes, Pods, and Clusters and how they interact with each other. However, I found no reference on how these interact with the host's world, i.e. it would be nice to see some diagrams or similar showing how the various services and concepts (pods, nodes, services, processes such as kubectl, kube-proxy, kubeadm) interact with the host and its native network. I have been looking quite thoroughly at the docs, yet I have no clear picture of how that looks in my head. Making it clearer would lower the barrier to learning and using it. Can anyone point me to a diagram/doc?

I'm asking because I want to know exactly what I should or shouldn't expose network-wise. For instance, imagine that I have an equivalent of Amazon's VPCs: what should be exposed to the outside and what shouldn't? How does a service get exposed? (Is there a Kubernetes process through which all the traffic flows, e.g. kube-proxy, or does it expose the pod's port directly?) Would it be OK to have kubectl only accessible internally (if I'm willing to do an SSH tunnel to one of the machines)? Would that still allow Kubernetes to work well? These things would help sysadmins have a clear picture of whether their network and cluster configuration are secure, and be confident that they are exposing the least they should.
@joantune The best reference I've found for some of the questions you asked is this video: |
Issues go stale after 90d of inactivity. Prevent issues from auto-closing with a /remove-lifecycle stale comment. If this issue is safe to close now, please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or
@liggitt I'm pretty sure we can close this now as many deployments secure the kubelet API OOTB, and the APIs are v1/stable already.
agree, closing |
Clean up formatting in the enhancement
Description
The kubelet API gives access to a wide variety of resources.
Allow authenticating requests to the kubelet API using any of:
Allow authorizing requests to the kubelet API using one of:
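A sketch of how these authentication and authorization options are typically expressed in a KubeletConfiguration file (field names per kubelet.config.k8s.io/v1beta1; the CA path is a placeholder, and the file is written locally here purely for illustration):

```shell
# Write an illustrative KubeletConfiguration locally; the kubelet normally
# reads such a file via its --config flag.
cat > kubelet-config.yaml <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false          # reject unauthenticated requests
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
  webhook:
    enabled: true           # validate bearer tokens via TokenReview
authorization:
  mode: Webhook             # delegate decisions via SubjectAccessReview
EOF
```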
Progress Tracker
- Ping @kubernetes/docs on the docs PR
- Ping @kubernetes/feature-reviewers on this issue to get approval before checking this off

FEATURE_STATUS: IN_DEVELOPMENT