
Support Kubelet --authorization-mode=Webhook #5176

Closed

jmthvt opened this issue May 17, 2018 · 15 comments

@jmthvt
Contributor

jmthvt commented May 17, 2018

The ability to set --authorization-mode=Webhook for kubelet in the cluster spec.
Currently, setting anonymous-auth=false for kubelet switches it to cert auth. We need --authorization-mode=Webhook in order to allow serviceaccount tokens to communicate with the kubelet.

This would, for example, fix the Prometheus kubelet exporter, which currently fails with "server returned HTTP status 401 Unauthorized" on a kops cluster with anonymous-auth=false.

I see there is already a flag for this: https://github.com/kubernetes/kops/blob/release-1.9/pkg/apis/kops/componentconfig.go#L28

But this is not really supported by kops yet; many things would break.

@krogon

krogon commented May 24, 2018

The remedy for the kubelet is to switch metrics scraping from the https port to the http port, which does not require authorization for the /metrics endpoint.
The equivalent prometheus-operator chart value is exporter-kubelets.https=false.
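
For example, as a Helm values override (a sketch, assuming the chart key above; the chart and release names are whatever you use):

# values.yaml for the prometheus-operator chart
exporter-kubelets:
  https: false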

Regarding the authorization-mode flag: as you mentioned, it is available from kops 1.9.0. The missing piece could be in the out-of-the-box RBAC rules. Users have reported that the following permissions are missing from system:node:

- apiGroups:
  - ""
  resources:
  - nodes/proxy
  verbs:
  - create
  - get

source: #3891 (comment)

@jmthvt
Contributor Author

jmthvt commented May 30, 2018

Thanks @krogon, switching to http fixed Prometheus for now.

@onyxet

onyxet commented Jul 4, 2018

@jeyglk You can also resolve this by passing the configuration argument --authentication-token-webhook=true to the kubelet. This flag allows a ServiceAccount token to be used to authenticate against the kubelet(s). Note that this flag is only supported in versions of kops higher than v1.9.6.
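
In the kops cluster spec this maps to a field on the kubelet config; a minimal sketch (the same field devyn confirms below on kops 1.10):

kubelet:
  anonymousAuth: false
  authenticationTokenWebhook: true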

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Oct 2, 2018
@devyn

devyn commented Oct 16, 2018

/remove-lifecycle stale

I've found that I still have to bind the appropriate nodes permissions to the kubelet-api user in order to resolve this. Binding the system:kubelet-api-admin role (already provided by Kubernetes) to the kubelet-api user with a ClusterRoleBinding seems to be sufficient, though.

I'm using the following config for kubelet:

kubelet:
  anonymousAuth: false
  authenticationTokenWebhook: true
  authorizationMode: Webhook

kops version 1.10.0.

I think it would make sense for kops to bind kubelet-api to system:kubelet-api-admin, since that role is already provided and appears to exist for exactly this purpose. Everything just works once I add that binding manually (including denying kubelet access to default serviceaccounts and unauthenticated users, as I want).
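
For anyone wanting to try the same, a sketch of that binding (the binding name is arbitrary; the role and user names are the built-in ones mentioned above):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubelet-api-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kubelet-api-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kubelet-api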

k8s-ci-robot removed the lifecycle/stale label on Oct 16, 2018
@mazzy89
Contributor

mazzy89 commented Dec 13, 2018

Same as you, @devyn, I'm using the following configuration:

kubelet:
    anonymousAuth: false
    authorizationMode: Webhook
    authenticationTokenWebhook: true

together with kube-prometheus.

I have the ClusterRole system:kubelet-api-admin:

$ kubectl get clusterrole system:kubelet-api-admin -o yaml                                                                              
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: 2018-12-13T10:45:49Z
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kubelet-api-admin
  resourceVersion: "60"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterroles/system%3Akubelet-api-admin
  uid: 40f9fe23-fec4-11e8-bad8-0ed9e9ae5b3c
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - proxy
- apiGroups:
  - ""
  resources:
  - nodes/log
  - nodes/metrics
  - nodes/proxy
  - nodes/spec
  - nodes/stats
  verbs:
  - '*'

However, when I try to proxy like this, I get an error:

$ kubectl port-forward svc/grafana 3000
error: error upgrading connection: unable to upgrade connection: Forbidden (user=kubelet-api, verb=create, resource=nodes, subresource=proxy)

@markine

markine commented Dec 13, 2018

@mazzy89 You're close; you also need a ClusterRoleBinding. Here's what I use to get logs/exec to work:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kubelet
rules:
- apiGroups:
  - ""
  resources:
  - nodes/stats
  - nodes/metrics
  - nodes/log
  - nodes/spec
  - nodes/proxy
  verbs:
  - create
  - get
  - update
  - patch
  - delete

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubelet-api
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubelet
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kubelet-api

@mazzy89
Contributor

mazzy89 commented Dec 13, 2018

I simply had to add the ClusterRoleBinding; now it works. Thank you @markine.

@jurgenweber

A bit off kilter here, but kops 1.11 recommends setting anonymousAuth: false. Before I do, I would like to review what is currently authenticating anonymously... Any good way to do this? There is this issue with Prometheus, cool, but I wonder what else will get broken.

@jhohertz
Contributor

@jurgenweber: I'm not sure there is a good way to audit anonymous access. One of the things mentioned in relation to the CVE that prompted the recommendation to disable anonymous auth was that there was little visibility into what might have exploited it.

The main impact I saw was around metrics-server's API aggregation needing to authenticate: the need to enable this webhook auth mode, plus related RBAC rules, which I think the metrics-server helm chart now incorporates.
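
There's no audit trail on the kubelet itself, but a rough spot-check is possible: probe a node's kubelet without credentials and see what answers (a sketch; NODE_IP is a placeholder for one of your nodes):

# 200 means anonymous requests are currently served; 401/403 means they are rejected
curl -sk -o /dev/null -w '%{http_code}\n' https://NODE_IP:10250/metrics
curl -sk -o /dev/null -w '%{http_code}\n' https://NODE_IP:10250/pods

This only shows what anonymous callers could reach, though, not which clients actually depend on it.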

@vietwow

vietwow commented Feb 19, 2019

@jhohertz Could you please share exactly what related RBAC you added? I am also hitting this problem with metrics-server. Thanks.

Best Regards,
VietNC

@jhohertz
Contributor

@vietwow: So there were two bits to it... one was adding this (as now seen in the chart): https://github.com/helm/charts/blob/master/stable/metrics-server/templates/aggregated-metrics-reader-cluster-role.yaml
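
Roughly, that template defines a reader role for the metrics.k8s.io API, something like this (paraphrased from the linked chart; the name and labels may differ by chart version):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:metrics-server-aggregated-reader
  labels:
    rbac.authorization.k8s.io/aggregate-to-view: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch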

And the other was to give kubelet-api access, per: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-authentication-authorization/

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kubelet
rules:
- apiGroups:
  - ""
  resources:
  - nodes/stats
  - nodes/metrics
  - nodes/log
  - nodes/spec
  - nodes/proxy
  verbs:
  - create
  - get
  - update
  - patch
  - delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubelet-api
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubelet
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kubelet-api

@rdrgmnzs
Contributor

rdrgmnzs commented Apr 9, 2019

The --authorization-mode flag from the initial request was added with PR #4924

/close

@k8s-ci-robot
Contributor

@rdrgmnzs: Closing this issue.

In response to this:

The --authorization-mode flag from the initial request was added with PR #4924

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
