Support running runc as non/less privileged user #38
See also https://git.gnome.org/browse/linux-user-chroot/tree/README. See discussion about use of ...
We just need to reconsider some default mounts for this to work, I think; maybe drop ...
This would allow containers to be used in shared computing environments such as HPCs. Very exciting!
I would be very interested to find out how/when this is implemented, especially as it may help me create a transferable environment for HPC systems, where I have no sudo and no chance to install Docker.
So science is interested. Now we need enterprise so somebody will actually start working on this ;)
+1 from the HEP (High Energy Physics) community. You can have your hundreds of thousands of cores even with a common operating system like RHEL/CentOS/Scientific Linux, but you still end up with Android-like fragmentation because all computing centres do updates on their own schedules. When you send your job to various computing centres you also want to provide your container as the environment, preferably one that runs as an unprivileged container. The container protects you from the fragmentation, and you don't get magic differences due, for example, to an update of libm. The HTCondor batch system already has some support for Docker: https://research.cs.wisc.edu/htcondor/HTCondorWeek2015/presentations/ThainG_Docker.pdf
On Wed, Oct 07, 2015 at 12:27:24PM -0700, davidlt wrote:
Page 29 of those slides shows the host's sysadmin starting a Docker ...
More generally, I'm not sure how this is going to work for ...
Creation of new namespaces using clone(2) and unshare(2) in most ...
So an unprivileged user should be able to create a user namespace, ...
Doing something like making runc setuid-root would be a bad idea, ...
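To make that unprivileged path concrete, here is a minimal Go sketch (not runc's actual code) of a process that creates a user namespace plus mount and PID namespaces and maps its own uid/gid to root inside them. It only works on kernels that allow unprivileged user namespaces, which stock RHEL 6 does not:

```go
// Minimal sketch, not runc's actual code: spawn a shell in new user, mount
// and PID namespaces as an unprivileged user, mapping our uid/gid to root
// inside the container. Fails with EPERM on kernels that do not allow
// unprivileged user namespaces (e.g. stock RHEL 6).
package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		// CLONE_NEWUSER needs no privileges; the mount and PID namespaces are
		// then created inside the new user namespace, where we hold CAP_SYS_ADMIN.
		Cloneflags: syscall.CLONE_NEWUSER | syscall.CLONE_NEWNS | syscall.CLONE_NEWPID,
		UidMappings: []syscall.SysProcIDMap{
			{ContainerID: 0, HostID: os.Getuid(), Size: 1},
		},
		GidMappings: []syscall.SysProcIDMap{
			{ContainerID: 0, HostID: os.Getgid(), Size: 1},
		},
	}
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```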
We disallow software being setuid-root or installed as root. I love the way runc is now: it seems to be a single capable binary, no need for special accounts, no need for some daemon. The only thing that's missing is the ability to use it without a root account.
On Fri, Oct 09, 2015 at 12:23:32AM -0700, davidlt wrote:
Most (and hopefully all ;) setuid-root programs are that way because ...
Then maybe the question is: what do we lose if we take away root permissions from runC on RHEL 6, RHEL 7 and mainline kernels? IIRC LXC supports unprivileged containers on 3.12 and later kernels, and Docker should have support for user namespaces in 1.9 according to a PR I managed to find. We have ~170 computing centres connected, and that's how you achieve a high number of cores to process big data. Currently they are running RHEL 6.X/CentOS 6.X/Scientific Linux 6.X; they will be moved to 7.X soonish, I believe. There are a few cases where people migrated to 7.X and just use a full-system container with LXC and a RHEL 6.X rootfs. Now, distributing the image (rootfs) and the runC binary to all computing centres is an easy task. Up to this point I didn't need to involve administrators from all computing sites (no need for special users, no daemons, etc.). But right now I cannot use it, because I don't have root permissions. The preference would be to have everything centralised, so that you don't have to involve ~170 people to do the right thing, which would then take weeks to months to set up.
To be able to run this as a non-privileged user, user namespaces are just one of the problems. I think we also need to look at some improvements in the way we are handling cgroups right now, as that requires root permission. AFAIK, unprivileged LXC used a privileged cgmanager daemon to handle its own cgroup assignment.
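To illustrate why cgroups are the sticking point, here is an illustrative sketch only (cgroup v1 layout assumed, "mycontainer" is a hypothetical name): the controller hierarchy under /sys/fs/cgroup is root-owned by default, so both the mkdir and the write below fail for an unprivileged user unless an administrator delegates a subtree or a privileged helper such as cgmanager acts on the user's behalf.

```go
// Illustrative sketch only (cgroup v1 paths, hypothetical cgroup name):
// shows the failure an unprivileged process hits when it manages cgroups
// directly. Both calls typically return "permission denied" for non-root
// users, because /sys/fs/cgroup is owned by root unless a subtree has been
// delegated (chown'ed) by an administrator or a privileged helper.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	cg := filepath.Join("/sys/fs/cgroup/memory", "mycontainer")
	if err := os.Mkdir(cg, 0o755); err != nil {
		fmt.Println("create cgroup:", err)
		return
	}
	// 512 MiB hard memory limit for everything placed in this cgroup.
	limit := []byte("536870912")
	if err := os.WriteFile(filepath.Join(cg, "memory.limit_in_bytes"), limit, 0o644); err != nil {
		fmt.Println("set memory limit:", err)
	}
}
```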
That's correct.
Hello! For scientific computing (where one is running relatively "normal" POSIXy applications) the Common Workflow Language is trying out a solution for rootless containers: https://github.com/common-workflow-language/common-workflow-language/wiki/Userspace-Container-Review#getting-userspace-containers-working-on-ancient-rhel A bit of a hack, but no root, weird kernel, or setuid binary is needed. Obviously one should use a more mature approach, but for the many academic clusters running older kernels this should suffice until they can upgrade. [idea by @mr-c, proof of concept by @kdmurray91] The CWL community anxiously awaits a mature and well-adopted open containers standard, so please steal this idea and run with it :-)
A couple more interesting projects trying to solve this problem: ...
Note that Shifter uses (real) chroot and thus requires root.
Though Shifter could be adapted to use proot/fakechroot. I quite like their ...
I have not used Shifter, but their documentation (see ...

On Mon, Feb 15, 2016 at 11:35 AM, Michael R. Crusoe <...>
@chrisfilo Yeah, we thought the same thing, then dug further.
If it requires root, what's the point of Shifter then?

On Mon, Feb 15, 2016 at 11:55 AM, Michael R. Crusoe <...>
From what I see (without running it): scheduler integration (Slurm, others), the ability to run the same image simultaneously across a cluster, and caching and management of images.
To return this to @discordianfish's original question: PRoot allows root-free running of "normal" containers (but possibly not some exotic containers). However, I wouldn't rely on it for security; I would use it for ease-of-use scenarios.
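As a hedged sketch of the ease-of-use scenario (the rootfs path is a placeholder, and the proot binary is assumed to be on $PATH): PRoot intercepts syscalls with ptrace to fake chroot and bind mounts, so no root, setuid helper, or special kernel feature is needed.

```go
// Hedged sketch: running a command inside an unpacked image rootfs via PRoot,
// which fakes chroot and bind mounts with ptrace, so it needs no root, no
// setuid helper and no special kernel features. The rootfs path is a
// placeholder and the proot binary is assumed to be on $PATH.
package main

import (
	"os"
	"os/exec"
)

func main() {
	rootfs := "/home/user/images/centos6-rootfs" // hypothetical unpacked image

	cmd := exec.Command("proot",
		"-r", rootfs, // use this directory as the guest /
		"-b", "/proc", // bind the host /proc into the guest
		"-b", "/dev", // bind the host /dev into the guest
		"-w", "/", // initial working directory inside the guest
		"/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}
```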
I'll make clear something that @mr-c has implied: we need an unprivileged user to be able to do all operations, including installation, setup and image management, solely within $HOME (or some other unrestricted path), without being root. In other words, this should all be possible without any admin intervention whatsoever.
I have started a thread on the mailing list here: https://groups.google.com/a/opencontainers.org/forum/#!topic/dev/yutVaSLcqWI with my proposed actions to make this a reality.
Above I linked linux-user-chroot; that code has now migrated to https://github.com/projectatomic/bubblewrap
FYI, bubblewrap is setuid & requires non-privileged user namespaces, which are great when you have them. RHEL 6 does not.
bubblewrap does not require user namespaces; allowing container features to be safely exposed to userspace on kernels which don't have ... It might be interesting to have runc support mapping the JSON configuration to bubblewrap, but over time user namespaces will hopefully become secure enough that this will be a legacy thing. In the meantime though, if anyone is targeting non-userns kernels, bubblewrap might be interesting.
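A rough, hypothetical sketch of that mapping idea: the struct below is a made-up, simplified configuration (not the runtime-spec Go types), while the flags used (--ro-bind, --proc, --dev, --unshare-pid, --chdir) are standard bubblewrap options.

```go
// Rough sketch of mapping a (simplified, hypothetical) subset of a container
// configuration onto a bubblewrap command line. The miniConfig struct is NOT
// the real runtime-spec API, just enough fields to show the translation.
package main

import (
	"os"
	"os/exec"
)

type miniConfig struct {
	Rootfs  string   // unpacked image root on the host
	Cwd     string   // working directory inside the container
	Args    []string // process to run
	ROPaths []string // host paths to expose read-only
}

func bwrapArgs(c miniConfig) []string {
	args := []string{
		"--ro-bind", c.Rootfs, "/", // use the image rootfs as /
		"--proc", "/proc",
		"--dev", "/dev",
		"--unshare-pid",
		"--chdir", c.Cwd,
	}
	for _, p := range c.ROPaths {
		args = append(args, "--ro-bind", p, p)
	}
	return append(args, c.Args...)
}

func main() {
	cfg := miniConfig{Rootfs: "/home/user/rootfs", Cwd: "/", Args: []string{"/bin/sh"}}
	cmd := exec.Command("bwrap", bwrapArgs(cfg)...)
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}
```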
Hello @cgwalters, here is my experience trying the bubblewrap demo: ...
@mr-c Did you make bubblewrap setuid?
@mr-c You need either user namespaces, or the bwrap binary to be setuid/setcap'd. There is no other way with the current kernel to use namespaces.
@mr-c What distro/kernel are you running on?
@alexlarsson I understand, that is why I was advocating for ...
@mr-c I do want to note that I believe bubblewrap shipped as setuid is safe. It's a very minimal C app with zero dependencies (only libc) that is written with security/setuid in mind.
It's not like shipping a setuid runc, which would let you own the system.
Hey @alexlarsson, I'm not at all saying it isn't safe, just that I'm looking for other approaches, as setuid binaries aren't acceptable on basically all of the academic/research computing clusters I have run into.
@mr-c Even if, say, bubblewrap were shipped in RHEL 6.x?
@alexlarsson Of course; if runc/opencontainers support ships with the OS that they have installed, then there is no fight :-)
Oh, I just learned that there is a thread on the mailing list intersecting this conversation: https://groups.google.com/a/opencontainers.org/forum/#!topic/dev/yutVaSLcqWI
If you are running on RHEL 6, how do you get user namespace support?
Hello @rhatdan, is that question directed at me? I'm not personally running RHEL 6 on any of my systems, but a sub-thread was about finding a way to run containerized software on academic computing clusters, where RHEL 6 is very common. A proposed solution is in #38 (comment), which does not rely on capabilities, setuid binaries, or user namespace support. Since that post there have been other proposals to use some combination of capabilities, setuid binaries, or user namespace support to enable running runc as a non/less privileged user. Those won't be usable on academic computing clusters for a year or two. I think it would be great to see both proposals developed and incorporated.
OK, I have not reviewed the list of proposals, but my bottom line would be to get to RHEL 7 if at all possible, in order to work with the latest container technologies.
@mr-c IMO, it wouldn't make sense to incorporate ... Most notably, AFAICS ... Am I missing something?
@cyphar In the scientific software domain we are primarily using containers to solve software portability concerns, not security. We anticipate, and support, ... It would be great if there was a built-in fallback to support running an otherwise trusted program inside of a ...
On Wed, Apr 20, 2016 at 07:02:13AM -0700, Michael R. Crusoe wrote:
I think this may be conflating images and running containers. With ...
Obviously, not all runtime-spec configs would work with a ...
This is long, but it is the picture from my point of view. I am successfully using PRoot for some activities on RHEL/CentOS 6. I am even using it with QEMU for emulating POWER8 with a Fedora rootfs and ARMv8 with a CentOS rootfs. It does work.

It is true that RHEL 6 is currently the dominating Linux distribution, and hopefully the first roadmaps for migration to RHEL 7 will be announced this year (I hope). In my case we are building <400 RPMs (relocatable), which end up at <10GB for a full release. I built everything from glibc, gcc, binutils, llvm, gdb, python, etc., and it has to run at a high number of computing centres. The only common thing is that they have RHEL 6/CentOS 6/Scientific Linux 6 (binary compatible) installed as the OS (required). Installation of our software is centrally controlled via a distributed file system which is mounted at each site (this solved some of the problems). So we can make software centrally available at computing centres, but none of it ever depends on root permissions (a requirement). Yes, at some point an agreement could be made that some solution for Linux containers is required and has to be provided by all computing centres, but that is not a quick procedure.

I don't think we need (yet) a strong security guarantee. What we need is the ability to control the software stack except the kernel. E.g., we don't want different physics results because half of the computing centres decided to do a yum upgrade/update and their glibc (libm) was updated. Thus it is a way to increase reproducibility. We started shipping our own glibc once we hit a number of issues with TLS that were blocking our production jobs, but the fixes were backported only in CentOS 7.2, so we had to patch our glibc for a long period. This also unbinds us from the operating system migration schedule of the computing centres; we would decide on which rootfs we run.

I would love to have the ability to run a job within a container, but with hard limits on resources (CPUs and memory). If the job was scheduled on an 8-core slot with 16GB of RAM, it should not go outside those boundaries. Currently this is partly done via job scheduler monitoring and a virtual memory limit (wrong). There are no strict boundaries as far as I know. These things can be done differently depending on the computing centre; there is no one way of doing it, I guess. In addition, statistics (networking, CPU, memory, IO, etc.) per job would be interesting, even if the job is running multiple processes and does not have a native statistics API or similar. This would also mean there is one common way of acquiring statistics on jobs.

Of course, I would prefer to have an industry standard which works in these environments (or at least has plans to), rather than yet another solution for Linux-container-like environments.
I've heard @jfrazelle wanted to look into this? :)
@davidlt It isn't currently possible to set cgroup limits from an unprivileged user namespace (that is, if you start as a regular user). So you can't really set hard limits in that way, which limits you to rlimits that aren't nearly as useful. The same holds for ... For me, the important question is whether we can use ...
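For completeness, a small sketch of the rlimit fallback mentioned above, and why it is a poor substitute for a cgroup memory limit:

```go
// Sketch of the unprivileged fallback mentioned above: setrlimit(2) works
// without root, but it caps virtual address space per process rather than
// real memory use for the whole container, which is why it is a poor
// substitute for a cgroup memory limit.
package main

import (
	"fmt"
	"syscall"
)

func main() {
	limit := syscall.Rlimit{Cur: 16 << 30, Max: 16 << 30} // 16 GiB of address space
	if err := syscall.Setrlimit(syscall.RLIMIT_AS, &limit); err != nil {
		fmt.Println("setrlimit:", err)
		return
	}
	// The limit is inherited across fork/exec, but each child gets its own
	// 16 GiB budget, so N processes can still use roughly N * 16 GiB overall.
	fmt.Println("RLIMIT_AS set for this process and its children")
}
```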
Okay, this works on my fork of runC. There are some outstanding things to do, mostly related to giving more meaningful errors to users when their config won't work with a rootless container setup. You can see the code here: https://github.com/cyphar/runc/tree/rootless-containers
Closing this one so we can use #774 as the main tracking issue for this feature. It has a checklist and everything.
Right now runc requires being run as root, where technically it should be possible to run containers as an unprivileged user (at least if user namespaces are used).