Rootful container, NFS volume: lchown operation not permitted #14766
Comments
Thank you for reaching out, @berndbausch!
lchown is not going to be allowed on the server side of an NFS connection. The server does not understand user namespaces, so from its point of view it sees berndbausch trying to chown a file to a different UID, and it prevents that. The NFS server also does not respect namespaced capabilities like CAP_CHOWN. I am not sure why Podman is attempting the chown; one potential reason follows.
Could you strace podman to see what UIDs it is attempting to chown to? Perhaps this is Podman trying to chown to the current user when it does not need to, and NFS blocks that. I don't know why it works later. Maybe we only chown a new volume the first time it is used after creation.
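For example, a command along these lines should capture the relevant calls (the volume and image names are taken from the reproduction steps below; the exact set of chown-family syscalls Podman issues is an assumption):
$ sudo strace -f -e trace=chown,lchown,fchownat -o /tmp/podman-chown.trace podman run -it --rm -v mynfs:/myvol alpine sh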
Yes, I think we chown only the first time a volume is created. I don't think there is anything we can do from the Podman side; should we move this issue to a discussion (or close it)?
The question I have is whether we are chowning without first checking if the file is already owned by the expected user. If so, we could fix Podman to stat the directory before we attempt to chown it, and only chown when necessary.
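A minimal sketch of that check in Go (the helper name chownIfNeeded is hypothetical, not the actual Podman code path):

```go
package volume

import (
	"fmt"
	"os"
	"syscall"
)

// chownIfNeeded stats path and only calls Lchown when the ownership
// actually differs, so an NFS-backed volume that already has the right
// UID/GID never triggers a chown the server might reject.
func chownIfNeeded(path string, uid, gid int) error {
	fi, err := os.Lstat(path)
	if err != nil {
		return err
	}
	st, ok := fi.Sys().(*syscall.Stat_t)
	if !ok {
		return fmt.Errorf("unexpected stat type for %s", path)
	}
	if int(st.Uid) == uid && int(st.Gid) == gid {
		// Ownership is already correct; skip the chown entirely.
		return nil
	}
	return os.Lchown(path, uid, gid)
}
```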
Here is the trace, generated with
It doesn't look like it contains the desired information (i.e. the user/group that Podman wants to set the volume to). Is there an strace option I should add?
NFS servers will throw an ENOTSUPP error if you attempt to chown a directory to the same UID and GID the directory already has. If volumes are stored on NFS directories, this throws an ugly error and then works on the next try. Bottom line: don't chown directories that already have the correct UID and GID.
Fixes: containers#14766
[NO NEW TESTS NEEDED] Difficult to set up an NFS server in testing.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
After setting up an NFS-based volume, I try to launch a container. This fails with an error "lchown: ... operation not permitted". When I repeat the container launch command, it succeeds.
Steps to reproduce the issue:
Create an NFS-based volume:
$ sudo podman volume create mynfs --driver local --opt type=nfs --opt o=rw --opt device=192.168.1.16:/srv/nfs
Launch container:
$ sudo podman run -it --rm -v mynfs:/myvol alpine sh
Error: lchown /var/lib/containers/storage/volumes/mynfs/_data: operation not permitted
Launch container again:
$ sudo podman run -it --rm -v mynfs:/myvol alpine sh
/ #
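As a further check, the volume's backing directory can be inspected between the two runs to see whether its ownership actually changes; a sketch using the path reported in the error above:
$ sudo podman volume inspect mynfs
$ sudo stat -c '%u:%g %n' /var/lib/containers/storage/volumes/mynfs/_data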
Describe the results you received:
After creating the volume, the first attempt to launch a container fails. All subsequent attempts succeed.
Describe the results you expected:
All container launches succeed, including the first one.
Additional information you deem important (e.g. issue happens only occasionally):
As far as I can tell, this problem occurs consistently. I tried a different image (fedora) with the same result.
Same result for Podman 3.4.4 on Ubuntu 22.04 and Podman 4.0.3 on Fedora 35.
Output of podman version:

Output of podman info --debug (Ubuntu only):

Package info (e.g. output of rpm -q podman or apt list podman) (Ubuntu only):

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/main/troubleshooting.md)
The most recent Podman version I tried is 4.0.3.
I did check the Troubleshooting guide.
Additional environment details (AWS, VirtualBox, physical, etc.):
My Ubuntu host is physical (HP Thin Client t630 running Ubuntu 22.04).
My Fedora 35 host runs on VirtualBox on Windows, with a bridged network.
My NFS server is physical (HP Thin Client t520 running Debian 11).
NFS server setup:
NFS client:
When the container launch succeeds, I see the NFS filesystem mounted as expected (same output on the Fedora and Ubuntu servers, except for the clientaddr obviously):