
The podman driver should not require sudo or root #7480

Closed
afbjorklund opened this issue Apr 7, 2020 · 9 comments · Fixed by #7631
Assignees
Labels
co/podman-driver podman driver issues kind/bug Categorizes issue or PR as related to a bug. priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete.

Comments

@afbjorklund
Collaborator

Even though podman itself is run with sudo (docker uses a group instead), the driver should not require it.

✨  Using the podman (experimental) driver based on user configuration
💣  The "podman" driver requires root privileges. Please run minikube using 'sudo minikube --driver=podman'.

This will lead to the same kind of ownership and path issues that are plaguing the none driver...

Instead, only the podman commands should be wrapped in sudo (so that podman does not try to run rootless).
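A minimal sketch of the proposed behaviour (the function name is hypothetical, not actual minikube code): minikube itself runs as the invoking user, and only the podman invocations get a sudo prefix when needed.

```shell
# Sketch: compute the command prefix to use for podman invocations.
# Plain "podman" when already running as root, "sudo podman" otherwise.
podman_prefix() {
  if [ "$(id -u)" -eq 0 ]; then
    echo "podman"
  else
    echo "sudo podman"
  fi
}
```

The rest of minikube would then keep running as the regular user, avoiding root-owned files in the user's home directory.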

@afbjorklund afbjorklund added kind/bug Categorizes issue or PR as related to a bug. co/podman-driver podman driver issues labels Apr 7, 2020
@afbjorklund
Collaborator Author

i.e. for the docker driver we require that the user is part of the root-equivalent docker group:

https://docs.docker.com/engine/install/linux-postinstall/#manage-docker-as-a-non-root-user

$ docker ps

but for the podman driver we instead require that the user has passwordless sudo access to podman:

$ sudo podman ps

This basically allows the same level of root access as docker; that is, you still need to be an admin.
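For reference, passwordless sudo access to podman can be granted with a sudoers rule along these lines (the username and binary path below are placeholders; adjust for your system):

```
yourusername ALL=(ALL) NOPASSWD: /usr/bin/podman
```

As with the docker group, anyone with such a rule is effectively root-equivalent, which is the point being made above.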


Neither driver should wrap the minikube command in sudo, but instead run it as a regular user.

Running minikube (or kubernetes) entirely as a non-root user is currently not supported.

See e.g. https://github.com/rootless-containers/usernetes

Running minikube requires a user with enough privileges.

@afbjorklund
Collaborator Author

ping @medyagh

@tstromberg
Contributor

Related thread: https://twitter.com/rawkode/status/1239885013241470979

We can detect the necessity to run root via:

podman info | grep rootless
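A minimal sketch of that check (assuming the plain-text `podman info` output of that era, which includes a `rootless: true|false` line; the function name is made up):

```shell
# Sketch: decide from `podman info` output whether podman is running
# rootless. Reads the info text on stdin so it can be piped in.
is_rootless() {
  grep -q 'rootless: true'
}

# usage (requires podman installed):
#   podman info | is_rootless && echo "rootless podman detected"
```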

@priyawadhwa priyawadhwa added the priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. label Apr 8, 2020
@afbjorklund
Collaborator Author

@tstromberg:

We can detect the necessity to run root via:

podman info | grep rootless

We don't support running docker-in-podman with rootless podman, only with the regular root one.

Eventually we might default to crio-in-podman, but for now that requires using --container-runtime

@afbjorklund
Collaborator Author

This now works OK when running the docker container runtime.

$ ./out/minikube kubectl get nodes
E0413 17:36:02.552882   31952 api_server.go:169] unable to get freezer state: cat: /sys/fs/cgroup/freezer/libpod_parent/libpod-a2ebc4baa2e317a42afb6ff01d07fab6ec608a5ab2893425e6e58b8203748c3d/kubepods/burstable/pod6c3eb4fb6e1f774f1248d50ac61483b5/076cdc3a21654ba0d67ccf4a21af54e7f2a31a3edaf0d719947635c36586f55a/freezer.state: No such file or directory
NAME       STATUS   ROLES    AGE   VERSION
minikube   Ready    master   20s   v1.18.0
$ ./out/minikube kubectl -- get pods --all-namespaces
E0413 17:36:23.219211   32401 api_server.go:169] unable to get freezer state: cat: /sys/fs/cgroup/freezer/libpod_parent/libpod-a2ebc4baa2e317a42afb6ff01d07fab6ec608a5ab2893425e6e58b8203748c3d/kubepods/burstable/pod6c3eb4fb6e1f774f1248d50ac61483b5/076cdc3a21654ba0d67ccf4a21af54e7f2a31a3edaf0d719947635c36586f55a/freezer.state: No such file or directory
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   coredns-66bff467f8-4sktx           1/1     Running   0          30s
kube-system   coredns-66bff467f8-fkgt9           1/1     Running   0          30s
kube-system   etcd-minikube                      1/1     Running   0          31s
kube-system   kube-apiserver-minikube            1/1     Running   0          31s
kube-system   kube-controller-manager-minikube   1/1     Running   0          31s
kube-system   kube-proxy-49sxp                   1/1     Running   0          31s
kube-system   kube-scheduler-minikube            1/1     Running   0          31s
kube-system   storage-provisioner                1/1     Running   0          35s

It hangs when trying to use containerd or crio, though; maybe a CNI issue?


I think we might want to hide the cgroups thing (freezer).
Looks like the kicbase only mounts it for the docker runtime.

https://github.com/kubernetes-sigs/kind/blob/master/images/base/files/usr/local/bin/entrypoint#L48

Or something similar; anyway, the subdirectory is not available,
i.e. /sys/fs/cgroup/freezer/libpod_parent does not exist.

Only:

/sys/fs/cgroup/memory/libpod_parent
/sys/fs/cgroup/devices/libpod_parent
/sys/fs/cgroup/pids/libpod_parent
/sys/fs/cgroup/cpu,cpuacct/libpod_parent
/sys/fs/cgroup/blkio/libpod_parent
/sys/fs/cgroup/systemd/libpod_parent
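A hedged sketch of what preparing the missing hierarchy could look like, loosely modelled on the kind entrypoint linked above (the function name and approach are illustrative only, not the actual kicbase fix; the cgroup root is passed as a parameter for clarity):

```shell
# Illustrative only: make sure the freezer hierarchy that podman did
# not create exists under the given cgroup root (normally
# /sys/fs/cgroup). Under cgroups v1, mkdir inside a mounted controller
# creates a new cgroup.
ensure_freezer_cgroup() {
  cgroup_root="$1"
  if [ ! -d "$cgroup_root/freezer/libpod_parent" ]; then
    mkdir -p "$cgroup_root/freezer/libpod_parent"
  fi
}
```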

@afbjorklund afbjorklund self-assigned this Apr 13, 2020
@afbjorklund afbjorklund linked a pull request Apr 13, 2020 that will close this issue
@afbjorklund
Collaborator Author

afbjorklund commented Apr 14, 2020

I think the current podman behaviour is actually a layer bug...
First I thought it was a volume bug, but it is not entirely empty:

podman var volume lib/dpkg/alternatives:

awk  pager  rmt  w

docker var volume lib/dpkg/alternatives:

arptables  awk	ebtables  ip6tables  iptables  pager  pinentry	rcp  rlogin  rmt  rsh  w

So it has the first few items, from the low ubuntu layers.
But not the added files, from the added kicbase layers.


And then there is that UX bug, not being able to add mount options for any anonymous volumes.

Error: invalid container path "exec", must be an absolute path

Plus the main difference that any mount in podman containers starts out as noexec,nodev,nosuid

But I suppose we could give it a name, instead of a cache path like now?

@afbjorklund
Collaborator Author

I went back to /var - it seems it was the path mount that was causing it

With a regular named mount, the image boots up just fine - no missing files

@prateeksahu

prateeksahu commented May 8, 2020

Hi @afbjorklund , I am trying to run minikube with podman on RHEL 8.1 and I cannot run with podman driver without sudo. Is there a workaround to not wrap my minikube (and subsequent kubectl) commands in sudo and use the driver?

@afbjorklund
Collaborator Author

Hi @afbjorklund , I am trying to run minikube with podman on RHEL 8.1 and I cannot run with podman driver without sudo. Is there a workaround to not wrap my minikube (and subsequent kubectl) commands in sudo and use the driver?

Can you open up a new report about it? We have tried with Fedora 32, but not with RHEL 8 yet.
