
move most packages into internal #3028

Open
cyphar opened this issue Jun 13, 2021 · 13 comments
@cyphar (Member) commented Jun 13, 2021

At the moment all of our internal packages are importable from anywhere. There are several historical reasons for this:

  • Docker originally used LXC, but then created a Go container runtime as an internal library (libcontainer) which was used by Docker directly. This meant that containers were created by forking Docker and doing the namespace setup immediately, which turned out not to be a great idea. This library was moved to a separate repo, but was still imported by Docker, so most of the library code was being imported by another project. The OCI was created and libcontainer was wrapped in a binary called "runc", which was then moved to the OCI repo. But Docker still used libcontainer for a little while before switching. So there was a long time when we had to keep our APIs external.
  • The internal package system didn't exist until Go 1.4, and it wasn't really widely publicised at the time (I only heard about it a few years ago).

However, there are still modern users of our APIs:

  • Kubernetes (specifically cAdvisor and the Kubelet) makes fairly heavy use of our cgroup libraries and was, from memory, the first external user. Quite a few other projects also use our cgroup libraries.
  • Docker still imports a handful of our APIs, some of which only really exist for Docker's sake and aren't even used within runc (HostDevices comes to mind).

So we can't just move everything into an internal package, but we really should move most of it. We do have some users using libcontainer as a library, but that is fairly scary, because it is easy to misconfigure containers if you use (for instance) the Process APIs. We should move as many libraries as possible behind an internal package.

This will also make it easier to explain our SemVer policy -- because right now I would argue that SemVer doesn't cover our Go APIs, but I imagine some users are not aware of that. Putting most of libcontainer inside an internal package would solve this problem entirely.

@cyphar cyphar added this to the 1.1.0 milestone Jun 13, 2021
@dims (Contributor) commented Jun 13, 2021

taking stock, as of today here's what kubernetes is importing:

[dims@dims-a01 19:14] ~/go/src/k8s.io/kubernetes ⟩ rg opencontainers/runc | grep -v vendor | grep "\.go:" | grep -oh "\".*\"" | sort | uniq
"github.com/opencontainers/runc/libcontainer/apparmor"
"github.com/opencontainers/runc/libcontainer/cgroups"
"github.com/opencontainers/runc/libcontainer/cgroups/fs"
"github.com/opencontainers/runc/libcontainer/cgroups/fs2"
"github.com/opencontainers/runc/libcontainer/cgroups/fscommon"
"github.com/opencontainers/runc/libcontainer/cgroups/systemd"
"github.com/opencontainers/runc/libcontainer/configs"
"github.com/opencontainers/runc/libcontainer/devices"
"github.com/opencontainers/runc/libcontainer/utils"

@dims (Contributor) commented Jun 13, 2021

and here's what cadvisor is importing:

[dims@dims-a01 19:15] ~/go/src/github.com/google/cadvisor ⟩ rg opencontainers/runc | grep -v vendor | grep "\.go:" | grep -oh "\".*\"" | sort | uniq
"github.com/opencontainers/runc/libcontainer"
"github.com/opencontainers/runc/libcontainer/cgroups"
"github.com/opencontainers/runc/libcontainer/cgroups/fs"
"github.com/opencontainers/runc/libcontainer/cgroups/fs2"
"github.com/opencontainers/runc/libcontainer/configs"
"github.com/opencontainers/runc/libcontainer/intelrdt"

@kolyshkin (Contributor) commented

Let's see if we can remove anything but libcontainer/cgroups...

> taking stock, as of today here's what kubernetes is importing:
>
> "github.com/opencontainers/runc/libcontainer/apparmor"

The only use is apparmor.IsEnabled().

> "github.com/opencontainers/runc/libcontainer/configs"

This has some cgroup-related definitions; maybe we can move it to libcontainer/cgroups, leaving backward-compatible aliases in libcontainer/configs for smoother transition.

Same for cadvisor.
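For illustration, the backward-compatible aliases mentioned above could rely on Go's type-alias feature. This is a self-contained sketch in one package; Resources and LegacyResources are hypothetical names standing in for a type that moves from libcontainer/configs to libcontainer/cgroups, not runc's actual API:

```go
package main

import "fmt"

// Resources is a stand-in for a cgroup-related type after the move
// to its new package.
type Resources struct {
	MemoryLimit int64
}

// LegacyResources is a type alias (note the '='), like what
// libcontainer/configs could keep exporting after the definition
// moves. An alias IS the same type, so existing importers keep
// compiling and values flow between old and new names freely.
type LegacyResources = Resources

func main() {
	var old LegacyResources = Resources{MemoryLimit: 1 << 20}
	fmt.Println(old.MemoryLimit) // alias and original are interchangeable
}
```

Unlike a defined type (`type LegacyResources Resources`), an alias needs no conversions at call sites, which is what makes the transition non-breaking.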

> "github.com/opencontainers/runc/libcontainer/devices"

This should go away once runc 1.0.0 GA is released (see kubernetes/kubernetes@9a86d92)

> "github.com/opencontainers/runc/libcontainer/utils"

The only use is

func getCgroupV1Path(cgroupPath string) (string, error) {
        cgroupPath = libcontainerutils.CleanPath(cgroupPath)

(and I admit that despite staring at cgroups code for quite a long time I don't fully understand yet why CleanPath is ever needed)

Now for cadvisor, the only remaining pkg is

> "github.com/opencontainers/runc/libcontainer/intelrdt"

This is very similar to cgroups, and I guess we'll have to keep it public.

@kolyshkin (Contributor) commented

docker, cri-o, buildah, podman, etc. use a lot of stuff from libcontainer/devices, libcontainer/user, and so on -- it seems that kubernetes and cadvisor are just the tip of the iceberg.

@cyphar (Member, author) commented Jun 15, 2021

AFAIK the libcontainer/devices and libcontainer/user stuff that Docker et al. use is fairly minor -- libcontainer/user is needed by higher-level runtimes anyway, so we'd have to export it no matter what, while only a few bits of devices are needed (mainly HostDevices and a few other functions that only exist for Docker's purposes but live in libcontainer for historical reasons).

@cyphar (Member, author) commented Jun 15, 2021

> (and I admit that despite staring at cgroups code for quite a long time I don't fully understand yet why CleanPath is ever needed)

It was mostly an abundance of caution: we had cases where lexical path attacks were very trivial against runc, so we added CleanPath to every unsafe path. Once we switch to libpathrs or move to openat2-hardened paths, CleanPath (and SecureJoin) won't be necessary anymore.
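For context, the kind of lexical cleaning being discussed can be sketched with the standard library. This is a simplified sketch in the spirit of libcontainer/utils.CleanPath, not the exact implementation:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// cleanPath resolves "." and ".." components purely textually, so a
// hostile "../../.." sequence cannot climb above the root of a
// relative path. Simplified sketch, not runc's exact code.
func cleanPath(path string) string {
	if path == "" {
		return ""
	}
	path = filepath.Clean(path)
	if !filepath.IsAbs(path) {
		// Anchor relative paths at "/" so ".." cannot escape,
		// then strip the artificial leading "/" again.
		path = filepath.Clean("/" + path)[1:]
	}
	return filepath.Clean(path)
}

func main() {
	fmt.Println(cleanPath("/cgroup/../../../etc")) // "/etc"
	fmt.Println(cleanPath("a/../../b"))            // "b", not "../b"
}
```

Being purely lexical, this does nothing about symlinks on the real filesystem -- which is exactly why SecureJoin and openat2-style resolution are still needed for attacker-influenced paths, as the comment above says.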

@kolyshkin (Contributor) commented Jul 28, 2021

I tried it in #3049 but I do need some input from @opencontainers/runc-maintainers on what the internal path should be (see #3049 (comment) and the following discussion).

Basically, we have three options (under the opencontainers/runc/ prefix):

  1. internal/ (e.g. internal/logs)
  2. libcontainer/internal/ (e.g. libcontainer/internal/logs)
  3. internal/libcontainer/ (e.g. internal/libcontainer/logs)

My #3049 implemented approach number 1, while @hqhq thinks it's better to do number 2. Apparently, we need more opinions to move forward. Please speak your mind.

Of course, we're not limited to the above three options -- other variants (e.g. pkg/internal/logs) can be considered as well.

@kolyshkin kolyshkin modified the milestones: 1.1.0, 1.2.0 Sep 9, 2021
@kolyshkin (Contributor) commented

OK, this is obviously not ready for 1.1; moving to 1.2. Would appreciate some input from other @opencontainers/runc-maintainers on this.

@junnplus commented Jun 27, 2022

Although not related to this issue, I think we can move the main package into ./cmd/runc to keep the root path clean.

I opened PR #3521 for it; WDYT?

@kolyshkin (Contributor) commented

Moving this to 1.3.0 due to lack of input from @opencontainers/runc-maintainers wrt #3028 (comment)

@kolyshkin (Contributor) commented

OK, as a small step forward, we now have internal/ (added by #4099).

@rata (Member) commented Oct 21, 2024

Kubernetes is discussing dropping the runc imports here: kubernetes/kubernetes#128157. Some runc maintainers were tagged there.

I'd like to align what we want for runc here. What do others think?

I have mixed feelings about this, and I'm not aware of the history (or of whether it caused a lot of pain in the past). On one hand, I think Kubernetes, Docker, cAdvisor, etc. are very important users, and we need to do the cgroup handling anyway, so we might as well try to move as much as possible into internal and polish the API we want to expose for them. It's some work, but it seems completely doable.

On the other hand, containerd already exposes cgroup handling as a standalone package (https://github.com/containerd/cgroups), and that might cover almost everything they need, so that doesn't sound bad either. They are also considering just forking our current libcontainer package, which IMHO might make sense, but I'm not sure it's a good outcome (it might be; I haven't made up my mind on that yet).

But I'm lacking the history to know what was painful and what wasn't. What do others think?

@kolyshkin (Contributor) commented

I am very much against forking libcontainer. I have seen parts of lxc, docker, etc. forked many times, and every time it happens some bugs get fixed in one copy but not the other, and I'm the one who ends up needing the fix in the other one.

I've outlined my vision there: kubernetes/kubernetes#128157 (comment)

From our side, we need to decouple libcontainer/cgroups from the rest (mostly by moving the parts of libcontainer/configs used by cgroups into libcontainer/cgroups), move libct/cg to a separate repo, and use it from there.

A semi-related task is to sunset runc events --stats and move the GetStats methods from cgroups.Manager into a separate package (e.g. cgroupstats, which would use cgroups for everything else but add the Stats methods).
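A sketch of what such a split could look like in Go. All interface and type names here (Manager, StatsProvider, statsManager) are hypothetical stand-ins, not runc's actual API:

```go
package main

import "fmt"

// Stats is a stand-in for the cgroup statistics payload.
type Stats struct{ MemoryUsage uint64 }

// Manager is a hypothetical slimmed-down cgroup manager with stats
// collection removed.
type Manager interface {
	Apply(pid int) error
}

// StatsProvider is the stats side, factored into its own interface,
// which a separate cgroupstats package could own.
type StatsProvider interface {
	GetStats() (*Stats, error)
}

// fsManager is a dummy core manager for the sketch.
type fsManager struct{}

func (fsManager) Apply(pid int) error { return nil }

// statsManager layers stats on top of any Manager by embedding it.
type statsManager struct{ Manager }

func (statsManager) GetStats() (*Stats, error) {
	return &Stats{MemoryUsage: 4096}, nil
}

func main() {
	var m Manager = statsManager{fsManager{}}
	// Callers that still want stats opt in via a type assertion;
	// everyone else never sees GetStats at all.
	if sp, ok := m.(StatsProvider); ok {
		s, _ := sp.GetStats()
		fmt.Println(s.MemoryUsage) // 4096
	}
}
```

The point of the split is that the core Manager interface shrinks, while stats-hungry consumers (cAdvisor, the Kubelet) depend only on the add-on package.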
