Executor support via Kubelet API (in lieu of docker) #902
Had some more internal discussions about this, and we feel there's a path forward where we can use Kubernetes APIs to achieve much of the same functionality we currently get through docker.sock access. Admittedly, relying on docker.sock is not ideal, and it will become less acceptable as more clusters are configured with tighter security and OPA, such as in #942. These would be the alternatives for the interface that needs to be implemented:
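For context, here is a minimal Go sketch of the kind of executor interface in question. The method names follow operations mentioned later in this thread (GetFileContents, CopyFile, GetOutput, plus log retrieval); the exact signatures are assumptions, not the actual code:

```go
package executor

import "io"

// ContainerRuntimeExecutor is a hypothetical rendering of the runtime
// abstraction discussed in this issue; each backend (docker, kubelet,
// k8s API) would provide its own implementation.
type ContainerRuntimeExecutor interface {
	// GetFileContents returns the contents of a file inside a container.
	GetFileContents(containerID, sourcePath string) (string, error)
	// CopyFile copies a file out of a container to a local path.
	CopyFile(containerID, sourcePath, destPath string) error
	// GetOutput returns the stdout/stderr produced by a container.
	GetOutput(containerID string) (string, error)
	// Logs streams a container's logs.
	Logs(containerID string) (io.ReadCloser, error)
}
```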
Would this mean that Argo wouldn't work in cri-o/containerd installs?
Actually the opposite. By moving up the stack to the kubelet and/or the Kubernetes API server, it will accommodate all runtimes. @JulienBalestra reached out and mentioned he has an upcoming kubelet implementation to contribute. In the end, I think we may just need to provide different options, since there are tradeoffs to each approach:

- docker - least secure; scalable; does not support any other runtimes. This can potentially be deprecated and/or replaced with a kubelet implementation.
- kubelet - I'm still trying to understand the security implications and how the kubelet authorizes requests and works with K8s RBAC, but it is definitely more secure than docker, supports all runtimes, and its scalability would be similar to docker's, since each executor would communicate with its own node.
- k8s API - secure; supports all runtimes; least scalable, since workflow pods would be copying artifacts and retrieving logs through the API server, which would then become a bottleneck.
I hope I'll give enough details to help on this. The kubelet can perform (service account) bearer token validation with the webhook feature. This is properly covered by RBAC like:

```yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubelet
rules:
- apiGroups: [""]
  resources: ["nodes/log", "nodes/proxy"]
  verbs: ["get", "list"]
```

The kubelet authN/authZ is documented here. The challenges I see in this implementation are in the communication with the kubelet endpoint:

1) find the endpoint
2) use valid certificates, or assume insecure HTTPS behavior

This could become more complex because there are a lot of possible setups.

The Kubernetes approach should be easier but least scalable, as @jessesuen mentioned: the API server will have more pressure. By default these exec calls go through the apiserver unless the setup is configured to upgrade the websocket connection directly to the container runtime, but that's rarely the case, I suppose 🤔

I think we can agree on providing both the kubelet and the Kubernetes approach.
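To make challenge 2) concrete, here is a minimal Go sketch of talking to the kubelet API with a service-account bearer token. The node IP, the default secure port 10250, and the pod/container names are placeholder assumptions, and TLS verification is skipped to illustrate the "insecure HTTPS" fallback:

```go
package main

import (
	"crypto/tls"
	"io"
	"net/http"
	"os"
)

func main() {
	// The pod's service-account token; the kubelet validates it via the
	// webhook authn feature and authorizes it against the RBAC rule above.
	token, err := os.ReadFile("/var/run/secrets/kubernetes.io/serviceaccount/token")
	if err != nil {
		panic(err)
	}
	// Challenge 2): without the CA that signed the kubelet's serving
	// certificate, we have to assume insecure HTTPS behavior.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	// Challenge 1): the node address must be discovered somehow; it is a
	// placeholder here. 10250 is the kubelet's default secure port.
	url := "https://10.0.0.1:10250/containerLogs/default/my-pod/main"
	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+string(token))
	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	io.Copy(os.Stdout, resp.Body) // stream the container's logs
}
```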
Thanks for the explanation @JulienBalestra. This helps. Also relevant: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-authentication-authorization/
For security/audit purposes, it would be desirable to limit use of the kubelet to just the performance-intensive read-only operations (GetFileContents, CopyFile, GetOutput) and use the API server for the rest. Perhaps we should consider implementing our own service, which could proxy the kubelet, to support these operations. Then we would be more independent of the kubelet and could better limit privileges to just the ones we need.
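One possible shape for that split, as a sketch only: a hybrid executor that routes the three read-only operations to the node-local kubelet and lets everything else fall through to the API server, reusing the hypothetical ContainerRuntimeExecutor interface sketched earlier in this thread:

```go
package executor

import "io"

// HybridExecutor is a hypothetical composition: read-heavy operations go
// to the node-local kubelet, everything else to the API server.
type HybridExecutor struct {
	kubelet ContainerRuntimeExecutor // read-only, node-local, scalable
	k8sAPI  ContainerRuntimeExecutor // audited, goes through the apiserver
}

// GetFileContents is served by the kubelet to keep load off the apiserver.
func (h *HybridExecutor) GetFileContents(containerID, sourcePath string) (string, error) {
	return h.kubelet.GetFileContents(containerID, sourcePath)
}

// CopyFile is likewise node-local.
func (h *HybridExecutor) CopyFile(containerID, sourcePath, destPath string) error {
	return h.kubelet.CopyFile(containerID, sourcePath, destPath)
}

// GetOutput is the last of the three read-only operations named above.
func (h *HybridExecutor) GetOutput(containerID string) (string, error) {
	return h.kubelet.GetOutput(containerID)
}

// Logs: anything not on the read-only allowlist goes to the API server.
func (h *HybridExecutor) Logs(containerID string) (io.ReadCloser, error) {
	return h.k8sAPI.Logs(containerID)
}
```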
@edlee2121 it's still possible to get some auditing with the apiserver SubjectAccessReview calls from the kubelet itself. Like:

```json
{
  "kind": "SubjectAccessReview",
  "apiVersion": "authorization.k8s.io/v1beta1",
  "metadata": {
    "creationTimestamp": null
  },
  "spec": {
    "resourceAttributes": {
      "verb": "get",
      "version": "v1",
      "resource": "nodes",
      "subresource": "proxy",
      "name": "$nodeName"
    },
    "user": "system:serviceaccount:default:argo",
    "group": [
      "system:serviceaccounts",
      "system:serviceaccounts:default",
      "system:authenticated"
    ],
    "uid": "46ea9cea-a0c2-11e8-b0b8-5404a66983a9"
  },
  "status": {
    "allowed": true,
    "reason": "allowed by ClusterRoleBinding \"argo-admin\" of ClusterRole \"cluster-admin\" to ServiceAccount \"argo/default\""
  }
}
```
@JulienBalestra Cool!
Now that #952 is merged, I'll try to make progress on the pure Kubernetes API integration.
Implemented in 5739436. @JulienBalestra, let's create a new issue for it. I'll go ahead and close this one. Thanks again!
Is this a BUG REPORT or FEATURE REQUEST?: FEATURE REQUEST
What happened:
We need to support runtimes other than docker (e.g. containerd). The container runtime in the executor has already been abstracted into an interface in anticipation of this need. The work here is to explore the undocumented kubelet API and implement the following interface using the kubelet API:
The workflow-controller-configmap will need an option to use the kubelet, and workflowpod.go will need to consult this setting when constructing the pod spec (skipping the docker.sock volume mounts).
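A rough Go sketch of that wiring, with the config field name and the helper function as assumptions for illustration rather than the actual workflowpod.go code:

```go
package controller

import apiv1 "k8s.io/api/core/v1"

// Config holds the controller settings; the field name is an assumption.
type Config struct {
	// ContainerRuntimeExecutor selects the executor backend,
	// e.g. "docker" (default) or "kubelet".
	ContainerRuntimeExecutor string
}

// addExecutorVolumes is a hypothetical helper: when the kubelet executor
// is selected, the docker.sock hostPath mount is skipped entirely.
func addExecutorVolumes(cfg Config, pod *apiv1.Pod) {
	if cfg.ContainerRuntimeExecutor == "kubelet" {
		return
	}
	socket := apiv1.HostPathSocket
	pod.Spec.Volumes = append(pod.Spec.Volumes, apiv1.Volume{
		Name: "docker-sock",
		VolumeSource: apiv1.VolumeSource{
			HostPath: &apiv1.HostPathVolumeSource{
				Path: "/var/run/docker.sock",
				Type: &socket,
			},
		},
	})
}
```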