kubectl debug: profile "sysadmin" does not work as expected when uid != 0 is specified #1650
Comments
@mochizuki875 what do you think about this?
@ardaguclu When `privileged: true` is specified and no user is set, the container runs as root with full effective capabilities:

```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: privileged
  name: privileged
spec:
  containers:
  - image: busybox
    command: ["sh", "-c", "sleep infinity"]
    name: privileged
    securityContext:
      privileged: true
  terminationGracePeriodSeconds: 0
```

```console
$ kubectl exec -it privileged -- /bin/sh
/ # whoami
root
/ # grep Cap /proc/1/status
CapInh: 0000000000000000
CapPrm: 000001ffffffffff
CapEff: 000001ffffffffff
CapBnd: 000001ffffffffff
CapAmb: 0000000000000000
```

On the other hand, when a non-root user is specified in the Pod-level `securityContext`, the permitted and effective capability sets are empty even though `privileged: true` is set:

```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: runasuser-with-privileged
  name: runasuser-with-privileged
spec:
  securityContext:
    runAsUser: 1000
  containers:
  - image: busybox
    command: ["sh", "-c", "sleep infinity"]
    name: runasuser-with-privileged
    securityContext:
      privileged: true
  terminationGracePeriodSeconds: 0
```

```console
$ kubectl exec -it runasuser-with-privileged -- /bin/sh
~ $ whoami
whoami: unknown uid 1000
~ $ grep Cap /proc/1/status
CapInh: 0000000000000000
CapPrm: 0000000000000000
CapEff: 0000000000000000
CapBnd: 000001ffffffffff
CapAmb: 0000000000000000
```

I have not checked the details yet, but this issue has been reported in #56374, and KEP #2763 has been proposed. So currently, I think the simplest workaround is to define `runAsUser` at the container level instead of the Pod level:

```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: test-pod
  name: test-pod
spec:
  # securityContext:
  #   runAsUser: 1000 # Override user != 0
  containers:
  - image: kennethreitz/httpbin
    name: test-pod
    securityContext:
      runAsUser: 1000 # Override user != 0
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
```

```console
$ kubectl debug test-pod -it --image nicolaka/netshoot --profile=sysadmin -- zsh
Defaulting debug container name to debugger-dwt84.
If you don't see a command prompt, try pressing enter.
test-pod ~ whoami
root
test-pod ~ grep Cap /proc/$$/status
CapInh: 0000000000000000
CapPrm: 000001ffffffffff
CapEff: 000001ffffffffff
CapBnd: 000001ffffffffff
CapAmb: 0000000000000000
```

or using a custom profile, e.g. `profile-runas-root.yaml`:

```yaml
securityContext:
  runAsUser: 0
  privileged: true
```

```console
$ kubectl debug test-pod -it --image=busybox --custom=profile-runas-root.yaml -- /bin/sh
Defaulting debug container name to debugger-mjp6g.
If you don't see a command prompt, try pressing enter.
/ # grep Cap /proc/$$/status
CapInh: 0000000000000000
CapPrm: 000001ffffffffff
CapEff: 000001ffffffffff
CapBnd: 000001ffffffffff
CapAmb: 0000000000000000
/ # exit
$ kubectl get pod test-pod -o=jsonpath='{.spec.ephemeralContainers[0].securityContext}' | jq .
{
  "privileged": true,
  "runAsUser": 0
}
```

Another solution which I came up with is to set `runAsUser: 0` in the `sysadmin` profile itself.
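A side note on why the first workaround behaves this way: a container-level `securityContext` field takes precedence over the same field set at the pod level, and the debug container only picks up what is set at the pod level, so moving `runAsUser` down to the app container leaves the debugger unaffected. A minimal sketch (pod and container names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: precedence-demo     # illustrative name
spec:
  securityContext:
    runAsUser: 1000         # pod level: inherited by containers that do not override it
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep infinity"]
    securityContext:
      runAsUser: 2000       # container level takes precedence: this container runs as UID 2000
```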
Thanks a lot for your extensive investigation @mochizuki875; it is really helpful. I think we should wait until KEP kubernetes/enhancements#2763 is revived, and the suggested workaround to overcome this issue in the meantime is to use custom profiling ^^. /triage accepted
@ardaguclu |
There are workarounds, but surely a user (of kubectl) should not have to do all that simply to get a privileged ephemeral container running as UID 0 (or any other). The running pods and their `securityContext` are a given and cannot be changed after the fact. If the ephemeral container does not set anything else in its spec, inheriting the pod-level settings is totally reasonable; but talking about the `sysadmin` profile, it should explicitly set the user it needs.
The workaround using custom profiles is fine for me, but as @frittentheke already said: I totally agree that the client-side `sysadmin` profile should set `runAsUser: 0` explicitly. Maybe we could at least implement a client-side warning if the Pod spec contains `runAsUser`.
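For illustration, such a warning might look like the following (the message text is hypothetical; kubectl does not print this today):

```console
$ kubectl debug test-pod -it --image=nicolaka/netshoot --profile=sysadmin -- zsh
# hypothetical warning, not actual kubectl output:
Warning: pod-level securityContext sets runAsUser: 1000, which also applies to
the ephemeral container; the sysadmin profile may not behave as expected.
Consider a custom profile (--custom) that sets runAsUser: 0.
```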
Thank you @frittentheke and @Phil1602 for dropping your valuable comments. A client-side warning is exactly what I was thinking about.
OK, I'll do that and create a PR. /assign
@mochizuki875 I think we can recommend the possible custom profile configuration we discussed ^^ in this warning message.
@ardaguclu |
What happened:

I wanted to create an ephemeral container with the `sysadmin` (or `netadmin`) profile to be able to capture traffic using `tcpdump`, via `kubectl debug` with `--profile=sysadmin`. The ephemeral container is set to `privileged: true` as expected, but the Pod-level `securityContext` forces the ephemeral container to run as user `1000`, which is IMO unwanted behavior for an ephemeral container with the `sysadmin` profile set.

What you expected to happen:

I would expect my ephemeral container with `sysadmin` to be able to capture traffic in any case. On the container-level `securityContext` I would not only expect `privileged: true`, but also `runAsUser: 0`, to avoid such user override collisions from the pod level. Otherwise, a parameter to override the user for the ephemeral container would help in that regard as well.

How to reproduce it (as minimally and precisely as possible):

Run `kubectl debug` against a Pod whose pod-level `securityContext` sets a non-root `runAsUser`, with the `sysadmin` profile set; see the sketch below.
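A minimal reproduction sketch, assembled from the pod specs and commands discussed in this thread (pod name and images follow those examples):

```yaml
# test-pod.yaml: pod-level runAsUser applies to every container, including ephemeral ones
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  securityContext:
    runAsUser: 1000
  containers:
  - name: test-pod
    image: kennethreitz/httpbin
```

```console
$ kubectl apply -f test-pod.yaml
$ kubectl debug test-pod -it --image=nicolaka/netshoot --profile=sysadmin -- zsh
```

The debug container then inherits `runAsUser: 1000` from the pod, and `grep Cap /proc/$$/status` inside it shows empty `CapPrm`/`CapEff` despite `privileged: true`.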
Anything else we need to know?:
Environment:
- Kubernetes client and server versions (use `kubectl version`):