Proposal: Authentication and Authorization support for Tiller #1918
Comments
I very much like this proposal. I would imagine doing this as four separate PRs:
Okay, a few other comments:
|
I think supporting it via a service in Kube will be better because users don't need to worry about one extra config.
I think there is a "security" benefit of using TPR. As things work today, it is possible for someone to overwrite configmaps and then rollback to deploy a bad version. With TPR, cluster admins can restrict regular users from writing the Tiller TPR but allow them to edit configmaps. But these can be done as step 2.
One thing that is not discussed so far is how to store the user info. I was thinking about storing that as a label in the configmap and then displaying it. This will also require adding a username field to the proto. |
Just want to record that I have been getting a lot of supportive feedback in Slack and in other places about this particular feature as defined here. This is something a wide variety of users are asking for in the 2.3.0 release. |
I've added this to the 2.3 milestone. @tamalsaha I am assuming this is a feature you are planning on leading. |
Are you expecting tiller to have a different set of client certificates than the kubernetes master? Or the same? In this case tiller is acting as an authenticating proxy - would be good to make sure we have a checklist for the sorts of things that are "safe" and are audited. Such as never storing the token, never writing it to logs, etc. Some of the bearer tokens will be very powerful (cluster-admin or highly privileged users). |
If you have the user's token, why do you need to do SubjectAccessReview for mutations? The Kube APIs will already do that. @liggitt re: use of impersonation or not. |
I am expecting them to be the same.
Sounds great!
Calling |
Assigned this to @tamalsaha as the feature owner. |
Not necessarily, there are plenty of other reasons requests could fail. You need to handle partial success already. |
Yes. Even the diff computation can fail. Is there any technical reason to avoid SubjectReview calls if possible (say, too much traffic for apiserver, etc.)? |
can't assume the login name is identical to the authenticated username. some auth systems allow multiple logins (bob, BOB, bob@domain) to map to a single username. Also, to impersonate, you need to obtain and impersonate all the user info: username, group membership, and "extra" data (could hold scope info, other info used by the authorizer, etc). I'm not sure how you plan to obtain that info from basic auth credentials.
is this proposing sending the content of the private key to the server so the server can replay as the user?
Is this referring to storing the resolved user information (username, groups, extra data, etc) or the user credentials themselves? User info (especially if used for replay) should only be stored in a way that guarantees its integrity. Some ways to do that are:
I think user info stored in mutable API fields is likely to be used without proper verification, allowing someone to modify it and perform escalation attacks.
All aspects of the user info - username, groups, and "extra" fields (see the SubjectAccessReview API) |
Great feedback, @liggitt! The WIP PR is here: https://github.com/kubernetes/helm/pull/1932/files#diff-8f4541a8ec0fb2b3143ca168f7083f17R171
This is new information to me. How does Kube api server handle it?
Yes. Is there a better way when users are using client cert auth?
The user info is not used for replay. This is just information stored to later display who performed a release. But data integrity is still important. We can use HMAC, etc. to verify that the username has not changed. But what to do if someone changes it? I don't see how we can stop that from happening. Currently everything is stored inside configmaps. We have considered storing it in a Helm-specific TPR. But either way someone can change the underlying data by calling the Kube api server (assuming they have authorization).
Yes. We are keeping everything related to UserInfo, so that it can be sent to SubjectReview. |
The ways the kube api server accepts basic auth info:
Tiller would need to resolve user info from basic auth credentials using the same mechanisms, I suppose.
Request and verify a presented client certificate using the same CA as the API server and extract username/group info from the presented cert. It is not proper to send private key content to the server in a header.
Then I'm missing where the user credentials are stored for later replay. |
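To make the client-certificate path above concrete, here is a minimal Go sketch of verifying a presented certificate against the same client CA the API server trusts and deriving user info from it, following the apiserver's x509 convention of CommonName as username and Organization as groups. The file paths and the userInfo struct are illustrative, not part of the proposal:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// userInfo is a placeholder for the resolved identity; the field names are
// illustrative and not part of any Tiller API.
type userInfo struct {
	Username string
	Groups   []string
}

// userFromClientCert verifies the presented leaf certificate against the same
// client CA the kube-apiserver trusts and maps Subject.CommonName to the
// username and Subject.Organization to groups, which is the convention the
// apiserver's x509 authenticator uses.
func userFromClientCert(leaf *x509.Certificate, caPEM []byte) (*userInfo, error) {
	roots := x509.NewCertPool()
	if !roots.AppendCertsFromPEM(caPEM) {
		return nil, fmt.Errorf("no CA certificates found")
	}
	if _, err := leaf.Verify(x509.VerifyOptions{
		Roots:     roots,
		KeyUsages: []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}); err != nil {
		return nil, err
	}
	return &userInfo{
		Username: leaf.Subject.CommonName,
		Groups:   leaf.Subject.Organization,
	}, nil
}

func main() {
	caPEM, err := os.ReadFile("ca.crt") // assumed path to the cluster's client CA
	if err != nil {
		panic(err)
	}
	certPEM, err := os.ReadFile("client.crt") // the certificate presented to Tiller
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(certPEM)
	if block == nil {
		panic("no PEM data in client.crt")
	}
	leaf, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println(userFromClientCert(leaf, caPEM))
}
```

In practice the verification would happen during the gRPC TLS handshake rather than by hand, but the subject-to-user mapping would be the same.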
I see. I was hoping that we could avoid reimplementing the full auth process of the kube api server. Is there an easier way to achieve this, like using the kube api server's auth process as a library so that we don't have to re-implement it?
I agree with you. But without sending the client private key how can we create an authenticated Kube Client using user's credential in Tiller?
So, the user info is used for the duration of a single api call to Tiller. One api call to Tiller will result in multiple calls to the Kube api using the user's credentials. At the end of a call, the username is stored in configmaps for record-keeping purposes. This username is not reused for replay in any later api calls to Tiller. |
I had another round of conversation with @liggitt . You can find the transcript here: https://docs.google.com/document/d/19Bxu1TMEOErP81E4kUt6zIoms0_3_O5lvPhEbzxYdQ8/edit?usp=sharing Clarifications:
Token: Client Cert: Basic Auth: Auth Proxy |
This issue seems to have diverged substantially from the original proposal. I think we'll need to re-assess whether this is feasible in the 2.3.0 timeframe. Okay... I'll clear up a few details:
I lost the thread on this one part of the argument. Maybe you can clarify it. We had initially discussed proxying credentials (regardless of type) through Tiller. So Tiller would merely accept the credentials from the client per request, and add those to the Kubernetes API client (e.g. via the existing kubectl library). No part of these credentials would be written to storage. Only a user name field (if present) would be allowed to be written to logs or stored in the release record, and this is merely for bookkeeping (never replays or verifications). Now it seems that the proposal would include implementing a built-in authentication system inside of Tiller that would take responsibility for validating credentials. Why? I would rather punt all verification (including SubjectAccessReview) to the Kube API server rather than turn Tiller into an authentication/authorization verification service. So I think somewhere along the line I lost the train of the argument. |
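For concreteness, "adding those to the Kubernetes API client" as described above could look roughly like the following client-go sketch; the parameter names are illustrative, and per the comment nothing here would be persisted or logged:

```go
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// clientForCaller builds a Kubernetes client that acts with the caller's
// forwarded credential rather than Tiller's own service account. The
// parameters are illustrative; nothing here is written to storage or logs.
func clientForCaller(apiServer, caFile, bearerToken, username, password string) (*kubernetes.Clientset, error) {
	cfg := &rest.Config{
		Host:            apiServer,
		TLSClientConfig: rest.TLSClientConfig{CAFile: caFile},
	}
	if bearerToken != "" {
		cfg.BearerToken = bearerToken // replay the forwarded bearer token
	} else {
		cfg.Username = username // or replay forwarded basic-auth credentials
		cfg.Password = password
	}
	return kubernetes.NewForConfig(cfg)
}

func main() {
	cs, err := clientForCaller(
		"https://kubernetes.default.svc",
		"/var/run/secrets/kubernetes.io/serviceaccount/ca.crt",
		"token-forwarded-by-helm", "", "")
	fmt.Println(cs != nil, err)
}
```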
I have put what I understood from my discussions with @liggitt in the above comment. We are still working based on the original proposal. I think we might come up with some follow up tasks based on the above comment for future versions of Kubernetes. So, I think auth should be possible in time for 2.3 release.
Great! Then there are no Kube api availability related issues as far as I see.
Agree.
Basic and Bearer auth stays mostly as-is in the original proposal (both authN / authZ parts). The new questions are in the following scenarios:
But we are also forwarding tokens and passwords in bearer/basic auth modes. Granted that client certs are difficult to change, how much of an issue is this? This is where I think the Helm team needs to give their feedback.
All authZ still handled by Kube api server. Tiller calls Kube with the authenticated client to check for these validations. |
How come proxying the SSL cert is frowned upon? Keep in mind that we already have SSL/TLS security between the Helm CLI and the Kube API. And we can also enable HTTP/2 (with TLS) on the gRPC connection. |
Requiring uploading a private key is not proxying, and it compromises the confidentiality of the credential in a way that runs counter to what most PKIs would expect. We should allow each credential to be used against tiller in its natural form... for bearer tokens and basic auth, that allows tiller to replay that credential. For client certificates, it does not. |
Yeah, I suppose I agree that this really is not the intended use case for a PKI architecture. But the proposed solution seems unduly complex, and still doesn't quite solve the problem. The confusion here is that the client can authenticate to Tiller with an SSL certificate (via gRPC, though it appears to be disabled at the moment; I'll check on it). We don't need a second mechanism to achieve that. What we are after is establishing that Tiller is correctly operating against the Kubernetes API with the "identity" (or at least the same authz rules) of the remote user. Ideally, the client who is configured with a client SSL cert could negotiate what amounts to a bearer token with the kube api server, and pass that token on to Tiller. Is that possible already, @liggitt ? Barring that, my inclination is to not support client-side SSL certificates in version 2.3.0. |
No, kube has declined to act as a token issuer for tokens representing end users. |
Some random/incomplete thoughts:
One possibility is to float the idea of extending the aggregation api to allow for aggregate APIs that run in a single namespace and are exposed only to that namespace. It would apply to more than just helm's needs.
Another thing is to work with the k8s addon-manager folks. It would be really great if helm could just be the addon-manager. Some of their concern was around it running, api-wise, very differently than the rest of k8s. Having helm use the aggregation api would solve a lot of that. Also work with the cluster-lifecycle folks. With the self-hosting work going on, it would be great for helm upgrade kube-apiserver to work. Whatever solution to these issues that works with those two teams enough that they accept it is probably greatly preferable to going it alone.
I don't like the idea of passing a token to tiller, having it validate "user can do x" and then tiller does all the actions with its own credentials. I really think if tiller's external it should just use the user's creds to do everything. That may require oauth delegations or something to work, but plugs any api authz issues that will creep up if you do validation only.
Even using the aggregate api, a tiller v2 api-compatible adapter that just ends up calling the aggregate api would still be possible. So it would be a v3 api, but not necessarily require a full helm v3 version?
Thanks,
Kevin
From: Matt Butcher [notifications@github.com]
Sent: Thursday, August 10, 2017 8:47 AM
To: kubernetes/helm
Cc: Fox, Kevin M; Manual
Subject: Re: [kubernetes/helm] Proposal: Authentication and Authorization support for Tiller (#1918)
Okay, so here's the situation:
* Our initial attempts to provide this feature have stumbled because there is no way currently for Kubernetes to validate the identity of a user account for an in-cluster service
* We attempted to work around this by forwarding credentials and having tiller authN on behalf of the client, but that is inherently insecure for most forms of auth
So I spent the last week discussing this with a variety of people. At this point, we have three options:
1. Re-implement Tiller as an aggregated API service.
2. Participate in the Container Identity WG and follow the outcome of their proposal.
3. Implement an Oauth2-like service as an aggregated API and use that.
All of these seem to be technically feasible, but each has pros and cons.
Re-Implement Tiller as Aggregated API Service
The pro for this one is that it deeply embeds Tiller into the Kubernetes API surface. We get AuthN via the Kubernetes API server.
But the cons are:
* Aggregated API is still in development and has not hit the v1 milestone yet
* Some modes of operating Tiller (e.g. namespace-constrained) will be gone
* Tiller will always have to operate as a cluster admin user
* There are some edge cases where a misconfigured cluster can undermine auth
* The entire API surface of Tiller will be rewritten, which means this is a huge breaking change and would require Helm 3.0
Container Identity WG
There is now an official Kubernetes working group to discuss this (and related) issues regarding establishing identity inside of a cluster. The first meeting was a few days ago.
Pros:
* The approach this WG devises will be implemented in Kubernetes
* The 50+ person group has some serious expertise
* We will be participating in a broad solution to the problem, rather than a one-off solution
Cons:
* This will take time
* It is possible that the outcome still would not meet our needs
Implement OAuth2-like Gateway
A third approach would be to use the Aggregated API to create a token validation service like OAuth/OIDC. This service would take the AuthN validation asserted by the Kubernetes API service and issue/manage tokens that Tiller can use in-cluster to validate that a user is who they assert to be.
Pros:
* This is a stand-alone service that Tiller would integrate with
* We can preserve backward compatibility to some extent, and enable this setup through configuration
* It can probably be done in a reasonable timeframe
Cons:
* Aggregated API is not stable
* There are some edge cases where a misconfigured cluster can undermine auth
* It is not 100% clear how Tiller will validate its connection to the gateway
* To some extent, we have to solve for the case that this can't quite be a pure OAuth/OIDC workflow (because AuthN is already handled by Kube API. We just issue and validate tokens). There may be security issues lurking there.
In any event, our earlier method of proxying auth is not viable in the short term. These are our current three "best ideas" for moving forward.
|
@technosophos, I can't tell whether any of the options considered involved using impersonation. We've discussed why that approach is challenging, and it's helpful to spell out why it's not an ideal solution to pursue. Suppose that Tiller could just reuse all of the authentication machinery present in the Kubernetes API server. If it could, it could authenticate the client as a given user (such as the person invoking the helm client), and turn around and impersonate that user—together with his group membership and "extra" authentication details—as a Tiller-specific service account. That service account would need to be authorized to impersonate that user. Under such impersonation, Tiller could do no more than the impersonated user would be able to do. The downsides to this approach are as follows:
I apologize if this is restating one of the options above, just using different terms. |
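For reference, the impersonation approach described above maps fairly directly onto client-go's ImpersonationConfig: Tiller would call the API server with its own service-account credentials while asserting the resolved username, groups, and extra data. A minimal sketch (the example identities are made up):

```go
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// impersonatingClient sends requests with Tiller's own service-account
// credentials but impersonates the authenticated end user (including groups
// and "extra" data), so RBAC is evaluated against that user.
func impersonatingClient(username string, groups []string, extra map[string][]string) (*kubernetes.Clientset, error) {
	cfg, err := rest.InClusterConfig() // Tiller's own credentials
	if err != nil {
		return nil, err
	}
	cfg.Impersonate = rest.ImpersonationConfig{
		UserName: username,
		Groups:   groups,
		Extra:    extra,
	}
	return kubernetes.NewForConfig(cfg)
}

func main() {
	cs, err := impersonatingClient("jane", []string{"dev-team"}, nil)
	fmt.Println(cs != nil, err)
}
```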
Issues go stale after 90d of inactivity. Prevent issues from auto-closing with an /lifecycle frozen comment. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
@seh @technosophos Hi! Although it ended up as a less-than-ideal solution, please let me 👍 the one involving impersonation, and highlight this part of the discussion:
I'm not really clear about that, but isn't it the apiserver, not tiller, that calls out to a webhook token authnz server for a token review? If that's the case, I suppose what Tiller would do, given a user's bearer token, is only to populate ImpersonateConfig, which is then passed to kubernetes/client-go for impersonation? Anyway, the webhook token review thing would be beneficial for me. The context is that I'm relying on heptio/authenticator, which is a webhook token authentication server that validates a user-provided bearer token with AWS IAM, so that any IAM user with an appropriate IAM role can authenticate against the K8S API. If Tiller supported impersonation with a user-provided bearer token (perhaps via something like Thanks! Edit 1: Sorry, but I had misunderstood how impersonation in k8s works. For impersonation, we have to allow Tiller to impersonate any user/group/etc. w/ RBAC. That would be too much permission, plus it doesn't involve the user's bearer token at all :) "Impersonation" in this case should be done by Tiller just replaying a user-provided bearer token, instead of utilizing k8s's impersonation mechanism. Edit 2: RBAC does seem to allow restricting who can impersonate whom: |
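Regarding Edit 2, the impersonation restriction can be expressed as an RBAC rule; the sketch below builds such a ClusterRole with client-go types and prints it as YAML. The role, user, and group names are made up for illustration:

```go
package main

import (
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// A ClusterRole whose subjects may impersonate only the listed users and
	// group; the role name, user names, and group name are illustrative.
	role := rbacv1.ClusterRole{
		ObjectMeta: metav1.ObjectMeta{Name: "tiller-limited-impersonator"},
		Rules: []rbacv1.PolicyRule{
			{
				APIGroups:     []string{""},
				Resources:     []string{"users"},
				Verbs:         []string{"impersonate"},
				ResourceNames: []string{"jane", "bob"},
			},
			{
				APIGroups:     []string{""},
				Resources:     []string{"groups"},
				Verbs:         []string{"impersonate"},
				ResourceNames: []string{"dev-team"},
			},
		},
	}
	out, err := yaml.Marshal(role)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```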
OT: Interesting, @mumoshu! There are quite a few moving parts to configuring heptio/authenticator; are you able to deploy it with just a |
Hi @whereisaaron Yes, it is definitely possible. I'll share it once ready 👍 |
For anyone interested, I'm working on my POC to delegate authn/authz w/ k8s RBAC via tokens to kube-rbac-proxy.
How it works
Note that, for now, you MUST protect your tiller container from anything other than access via the kube-rbac-proxy sidecar. kubectl-exec into the tiller should be prohibited, kubectl-port-forward to the tiller grpc port should be prohibited, etc. If I could manage to add TLS support for the kube-rbac-proxy mode, perhaps all of the above operations but kubectl-exec could be permitted? I'm not really an expert in this area but am willing to learn. Please let me know anything you have in mind about this! Thanks. |
I've put some time into investigating the possibility of supporting impersonation for tiller via the I had wanted This results in the following changes to the current code-base. Would it be an acceptable change to helm/tiller?
TBD;
|
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
/remove-lifecycle stale This is still an important feature that should be considered for 3.x if an implementation can be identified. It would be good to get feedback on @mumoshu's excellent work on this. |
/remove-lifecycle rotten |
Is there something I could do to make this happen? |
@docwhat there has been discussion in the community around this topic, and the general consensus is to close this in favour of the Helm 3 proposal and its Security Considerations. We plan on moving Helm towards a client-only model, which means that a Given that the general consensus is to move forward with the proposal in https://github.com/kubernetes-helm/community, I feel it is safe to close this one out as superseded by the Helm 3 proposal. |
Goal
For Tiller to execute all of its API operations "as" the user, so that (a) Tiller is not a "superuser" and (b) users' own IDs, etc. are attached to operations.
Architecture
The general idea is that the Helm CLI will forward the user's auth info from kubeconfig to the Tiller gRPC server via HTTP headers. The Tiller gRPC server will call the relevant Kubernetes authN/authZ APIs to perform authentication and authorization in an interceptor before applying any mutation. The flow is sketched below.
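A minimal sketch of such a server-side interceptor; the header key is a placeholder (the real keys are listed under "Choice of Header Keys" below), and the actual authentication and authorization calls are elided:

```go
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/metadata"
	"google.golang.org/grpc/status"
)

// authInterceptor pulls the forwarded auth info out of the incoming gRPC
// metadata, would resolve the user (e.g. via a TokenReview, not shown), and
// rejects the call before any mutation if authentication fails.
func authInterceptor(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
	md, ok := metadata.FromIncomingContext(ctx)
	if !ok {
		return nil, status.Error(codes.Unauthenticated, "missing auth metadata")
	}
	creds := md.Get("x-helm-auth-token") // placeholder header key
	if len(creds) == 0 {
		return nil, status.Error(codes.Unauthenticated, "no credentials forwarded")
	}
	// ... authenticate the credential and authorize the request here ...
	return handler(ctx, req)
}

func main() {
	srv := grpc.NewServer(grpc.UnaryInterceptor(authInterceptor))
	fmt.Println(srv != nil)
}
```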
Kube Client Authentication
The TokenReview api call will be made using the default service account. This is required since, if RBAC authorization is enabled in the cluster, the user may not have permission to perform the TokenReview api call.
All other kube api calls (including the SubjectAccessReview api) will be made using the user's auth info.
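A sketch of that TokenReview step using current client-go (the proposal predates the v1 authentication API, so exact types may differ): the call runs under Tiller's in-cluster service account, and the returned user info is what later checks would use:

```go
package main

import (
	"context"
	"fmt"

	authnv1 "k8s.io/api/authentication/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// reviewToken asks the API server who a forwarded bearer token belongs to.
// The call is made with Tiller's default service-account credentials because
// the end user may not be allowed to create TokenReviews themselves.
func reviewToken(ctx context.Context, token string) (*authnv1.UserInfo, error) {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		return nil, err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return nil, err
	}
	tr, err := cs.AuthenticationV1().TokenReviews().Create(ctx,
		&authnv1.TokenReview{Spec: authnv1.TokenReviewSpec{Token: token}},
		metav1.CreateOptions{})
	if err != nil {
		return nil, err
	}
	if !tr.Status.Authenticated {
		return nil, fmt.Errorf("token not authenticated")
	}
	return &tr.Status.User, nil // username, groups, and extra fields
}

func main() {
	u, err := reviewToken(context.Background(), "forwarded-bearer-token")
	fmt.Println(u, err)
}
```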
User info
Anonymous requests
From Kubernetes docs, "When enabled, requests that are not rejected by other configured authentication methods are treated as anonymous requests, and given a username of system:anonymous and a group of system:unauthenticated." I propose that Tiller does not support anonymous requests.
Choice of Header Keys
The following keys (case-sensitive) will be used in the golang context to forward auth info from kubeconfig to the gRPC server. The HTTP protocol itself has no limit on header length, so forwarding the above fields should be fine.
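On the client side, attaching the kubeconfig credentials to the outgoing gRPC context could look roughly like this; the header names below are placeholders rather than the proposal's actual keys:

```go
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc/metadata"
)

// withAuthMetadata attaches the auth material read from kubeconfig to the
// outgoing gRPC context as headers. The header names here are placeholders;
// the proposal's actual (case-sensitive) keys are defined separately.
func withAuthMetadata(ctx context.Context, bearerToken, username, password string) context.Context {
	md := metadata.New(map[string]string{})
	if bearerToken != "" {
		md.Set("x-helm-auth-token", bearerToken) // placeholder key
	}
	if username != "" {
		md.Set("x-helm-auth-basic-username", username) // placeholder key
		md.Set("x-helm-auth-basic-password", password) // placeholder key
	}
	return metadata.NewOutgoingContext(ctx, md)
}

func main() {
	ctx := withAuthMetadata(context.Background(), "token-from-kubeconfig", "", "")
	md, _ := metadata.FromOutgoingContext(ctx)
	fmt.Println(md)
}
```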
Discovering Tiller Server
We would like to support running Tiller in Standalone mode or Hosted mode. In Standalone mode, Tiller manages the cluster it is running on. This is the current setup. In Hosted mode, Tiller will be run outside the cluster that it manages. To support auto discovery of Tiller server address, we propose the following change:
If tillerHost is empty, first search for a service with labels {"app": "helm", "name": "tiller"} in the default tiller namespace (kube-system). If found, use the externalName in the service to connect to an externally hosted Tiller server. If not found, then use the existing port-forwarding mechanism to connect to the cluster.
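A rough sketch of that discovery logic with client-go; Tiller's default port 44134 is assumed for the external endpoint:

```go
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// discoverTiller returns the address of an externally hosted Tiller if a
// matching service exists, or "" to signal that the client should fall back
// to the existing port-forwarding mechanism.
func discoverTiller(ctx context.Context, cs kubernetes.Interface, namespace string) (string, error) {
	svcs, err := cs.CoreV1().Services(namespace).List(ctx, metav1.ListOptions{
		LabelSelector: "app=helm,name=tiller",
	})
	if err != nil {
		return "", err
	}
	for _, svc := range svcs.Items {
		if svc.Spec.ExternalName != "" {
			return fmt.Sprintf("%s:44134", svc.Spec.ExternalName), nil
		}
	}
	return "", nil // no external service: use port forwarding as today
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	addr, err := discoverTiller(context.Background(), cs, "kube-system")
	fmt.Println(addr, err)
}
```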
Misc Considerations
JSON API
If users are interested in accessing the Tiller api from a web page, they have to run gRPC Gateway to provide a JSON api.
Release Authorization
An additional layer of authorization may be added to the Tiller gRPC server using a Third Party Resource called Release. Cluster admins can configure RBAC authorization for the Release resource. The Tiller api server will perform an extra SubjectAccessReview call to check whether the user has permission to release a chart. This can be added later, once the initial proposal above is implemented.
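For illustration, such a check could be issued through client-go's authorization API as below; the Release API group and resource names are hypothetical since the TPR scheme is not finalized:

```go
package main

import (
	"context"
	"fmt"

	authzv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// canCreateRelease asks the API server whether the authenticated user may
// create Release objects in the given namespace. The SubjectAccessReview
// carries the user info (username, groups, extra) resolved earlier.
func canCreateRelease(ctx context.Context, cs kubernetes.Interface, username string, groups []string, namespace string) (bool, error) {
	sar := &authzv1.SubjectAccessReview{
		Spec: authzv1.SubjectAccessReviewSpec{
			User:   username,
			Groups: groups,
			ResourceAttributes: &authzv1.ResourceAttributes{
				Namespace: namespace,
				Verb:      "create",
				Group:     "helm.sh", // hypothetical API group for the Release TPR
				Resource:  "releases",
			},
		},
	}
	resp, err := cs.AuthorizationV1().SubjectAccessReviews().Create(ctx, sar, metav1.CreateOptions{})
	if err != nil {
		return false, err
	}
	return resp.Status.Allowed, nil
}

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ok, err := canCreateRelease(context.Background(), cs, "jane", []string{"dev-team"}, "default")
	fmt.Println(ok, err)
}
```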
Storing Release Data
Standalone mode: Currently Tiller stores data in configmaps. I think we should store release data in a TPR to avoid confusion. The Release object used for authorization can also be used for storing release data. This is also in line with the discussions on api extension.
Hosted mode: A Storage Driver can be implemented to store data in conventional databases like Postgres, etc. It should be very easy to implement this using the existing Driver interface.
Both of these options can be implemented after the initial auth flow is implemented.
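As a sketch of the Hosted-mode idea, a Postgres-backed driver might look like the following; the interface shown is a simplified stand-in rather than Tiller's actual pkg/storage/driver interface, and the table layout is made up:

```go
package main

import (
	"database/sql"
	"fmt"

	_ "github.com/lib/pq" // Postgres driver
)

// releaseDriver is a simplified stand-in for Tiller's storage Driver
// interface (the real interface has more methods and works with Release
// protobufs rather than raw bytes).
type releaseDriver interface {
	Create(key string, release []byte) error
	Get(key string) ([]byte, error)
	Delete(key string) error
}

// postgresDriver sketches a Hosted-mode backend; the schema is illustrative.
type postgresDriver struct{ db *sql.DB }

func (d *postgresDriver) Create(key string, release []byte) error {
	_, err := d.db.Exec(`INSERT INTO releases (key, body) VALUES ($1, $2)`, key, release)
	return err
}

func (d *postgresDriver) Get(key string) ([]byte, error) {
	var body []byte
	err := d.db.QueryRow(`SELECT body FROM releases WHERE key = $1`, key).Scan(&body)
	return body, err
}

func (d *postgresDriver) Delete(key string) error {
	_, err := d.db.Exec(`DELETE FROM releases WHERE key = $1`, key)
	return err
}

func main() {
	db, err := sql.Open("postgres", "postgres://tiller:secret@localhost/helm?sslmode=disable")
	if err != nil {
		panic(err)
	}
	var d releaseDriver = &postgresDriver{db: db}
	fmt.Println(d.Create("default.my-release.v1", []byte(`{"name":"my-release"}`)))
}
```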
Third Party Controller
A TillerC TPR controller was designed, prototyped and ultimately abandoned because the Kubernetes authZ process can't be extended to fit Tiller's authorization requirements.