dockerd: support for overlay storage driver #25396

Open
domasgim opened this issue Nov 18, 2024 · 2 comments
@domasgim

Maintainer: @G-M0N3Y-2503
Environment: x86/64 (VM), OpenWrt SNAPSHOT r25603-1a47ce5ff2

Description:

With the default provided Docker config, dockerd is supposed to default to the overlay2 storage driver, which according to the docs is the recommended option for most environments, but in reality dockerd on OpenWrt falls back to VFS.

config_get data_root globals data_root "/opt/docker/"

root@OpenWrt:~# cat /tmp/dockerd/daemon.json
{ "data-root": "\/opt\/docker\/", "log-level": "warn", "iptables": true 

When launching dockerd through init.d the following error messages can be seen:

...
Mon Nov 18 08:44:43 2024 kern.err kernel: [  767.873468] overlayfs: filesystem on '/opt/docker/check-overlayfs-support1084625961/upper' not supported as upperdir
...
Mon Nov 18 08:44:44 2024 daemon.err dockerd[8718]: time="2024-11-18T08:44:44.134747341Z" level=warning msg="WARNING: No swap limit support"

It seems that dockerd does not support overlayfs on top of an already existing overlayfs, which is the usual setup for an OpenWrt environment according to the wiki. I found someone already mentioning this on Stack Overflow here, but according to the kernel docs "The lower filesystem can even be another overlayfs"; the restriction is presumably on the upperdir, which matches the kernel error above, so I would suppose dockerd cannot use such a setup anyway.

root@OpenWrt:~# df -h /opt/docker/
Filesystem                Size      Used Available Use% Mounted on
overlayfs:/overlay      955.4M    174.1M    781.2M  18% /
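To double-check that this is the kernel restriction rather than a dockerd quirk, a manual overlay mount whose upperdir sits on the overlay root should presumably fail the same way (the paths below are only illustrative):

root@OpenWrt:~# mkdir -p /opt/docker/ovl-test/lower /opt/docker/ovl-test/upper /opt/docker/ovl-test/work /opt/docker/ovl-test/merged
root@OpenWrt:~# mount -t overlay overlay \
    -o lowerdir=/opt/docker/ovl-test/lower,upperdir=/opt/docker/ovl-test/upper,workdir=/opt/docker/ovl-test/work \
    /opt/docker/ovl-test/merged
# expected to be rejected, with the same "not supported as upperdir" message showing up in dmesg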

Dockerd therefore defaults to the VFS storage driver, which is less efficient and not recommended according to the Docker docs: "Each layer is a directory on disk, and there is no copy-on-write support. To create a new layer, a "deep copy" is done of the previous layer. This leads to lower performance and more space used on disk than other storage drivers..."

There is also a "no swap limit support" warning, but I haven't looked into the reason behind it.

root@OpenWrt:~# docker info
Client:
 Version:    25.0.3
 Context:    default
 Debug Mode: false

Server:
 Containers: 3
  Running: 0
  Paused: 0
  Stopped: 3
 Images: 2
 Server Version: 25.0.3
 Storage Driver: vfs       <----------- VFS, not overlay
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version:
 runc version:
 init version: de40ad0
 Security Options:
  seccomp
   Profile: builtin
  cgroupns
 Kernel Version: 6.1.82
 Operating System: OpenWrt SNAPSHOT
 OSType: linux
 Architecture: x86_64
 CPUs: 1
 Total Memory: 984.9MiB
 Name: OpenWrt
 ID: 47e7f5c2-6707-4888-b6b3-c93a906d7cbc
 Docker Root Dir: /opt/docker
 Debug Mode: false
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No swap limit support     <-------------- Warning

Main question

What are some possible solutions to make dockerd use overlayfs as its storage driver instead of VFS?

Overlayfs testing

I tried to work around this by creating a new directory directly under /overlay/ (which I've heard is not recommended!)

root@OpenWrt:/overlay/my_docker_root# df -h /overlay/my_docker_root
Filesystem                Size      Used Available Use% Mounted on
/dev/loop0              955.4M    174.2M    781.2M  18% /overlay
root@OpenWrt:~# cat docker-daemon.json
{ "data-root": "\/overlay\/my_docker_root\/", "log-level": "info", "iptables": true }

I have at least managed to run docker run hello-world, which seems to work fine, and docker info shows that the storage driver is overlay2:

root@OpenWrt:/overlay/my_docker_root# docker info
Client:
 Version:    25.0.3
 Context:    default
 Debug Mode: false

Server:
 Containers: 1
  Running: 0
  Paused: 0
  Stopped: 1
 Images: 1
 Server Version: 25.0.3
 Storage Driver: overlay2
  Backing Filesystem: f2fs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
 Swarm: inactive
 Runtimes: runc io.containerd.runc.v2
 Default Runtime: runc
 Init Binary: docker-init
 containerd version:
 runc version:
 init version: de40ad0
 Security Options:
  seccomp
   Profile: builtin
  cgroupns
 Kernel Version: 6.1.82
 Operating System: OpenWrt SNAPSHOT
 OSType: linux
 Architecture: x86_64
 CPUs: 1
 Total Memory: 984.9MiB
 Name: OpenWrt
 ID: 60e595d3-5b5c-492a-8197-7e0d6ad3be6a
 Docker Root Dir: /overlay/my_docker_root
 Debug Mode: false
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No swap limit support

Also here is dockerd output:

root@OpenWrt:~# /usr/bin/dockerd --config-file=/root/docker-daemon.json
INFO[2024-11-18T09:48:17.272146391Z] Starting up
INFO[2024-11-18T09:48:17.272648620Z] containerd not running, starting managed containerd
INFO[2024-11-18T09:48:17.283518155Z] starting containerd                           revision= version=1.7.13
INFO[2024-11-18T09:48:17.283998797Z] started new containerd process                address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=10381
INFO[2024-11-18T09:48:17.297392220Z] loading plugin "io.containerd.snapshotter.v1.aufs"...  type=io.containerd.snapshotter.v1
INFO[2024-11-18T09:48:17.308866821Z] skip loading plugin "io.containerd.snapshotter.v1.aufs"...  error="aufs is not supported (modprobe aufs failed: exit status 255 \"\"): skip plugin" type=io.containerd.snapshotter.v1
INFO[2024-11-18T09:48:17.308977991Z] loading plugin "io.containerd.event.v1.exchange"...  type=io.containerd.event.v1
INFO[2024-11-18T09:48:17.309003960Z] loading plugin "io.containerd.internal.v1.opt"...  type=io.containerd.internal.v1
INFO[2024-11-18T09:48:17.309047469Z] loading plugin "io.containerd.warning.v1.deprecations"...  type=io.containerd.warning.v1
INFO[2024-11-18T09:48:17.309363895Z] loading plugin "io.containerd.snapshotter.v1.blockfile"...  type=io.containerd.snapshotter.v1
INFO[2024-11-18T09:48:17.309522817Z] skip loading plugin "io.containerd.snapshotter.v1.blockfile"...  error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
INFO[2024-11-18T09:48:17.309562583Z] loading plugin "io.containerd.snapshotter.v1.btrfs"...  type=io.containerd.snapshotter.v1
INFO[2024-11-18T09:48:17.309758708Z] skip loading plugin "io.containerd.snapshotter.v1.btrfs"...  error="path /overlay/my_docker_root/containerd/daemon/io.containerd.snapshotter.v1.btrfs (f2fs) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
INFO[2024-11-18T09:48:17.309851773Z] loading plugin "io.containerd.snapshotter.v1.devmapper"...  type=io.containerd.snapshotter.v1
WARN[2024-11-18T09:48:17.309905780Z] failed to load plugin io.containerd.snapshotter.v1.devmapper  error="devmapper not configured"
INFO[2024-11-18T09:48:17.309930824Z] loading plugin "io.containerd.snapshotter.v1.native"...  type=io.containerd.snapshotter.v1
INFO[2024-11-18T09:48:17.310049891Z] loading plugin "io.containerd.snapshotter.v1.overlayfs"...  type=io.containerd.snapshotter.v1
INFO[2024-11-18T09:48:17.310216671Z] loading plugin "io.containerd.snapshotter.v1.zfs"...  type=io.containerd.snapshotter.v1
INFO[2024-11-18T09:48:17.310351320Z] skip loading plugin "io.containerd.snapshotter.v1.zfs"...  error="path /overlay/my_docker_root/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
INFO[2024-11-18T09:48:17.310429967Z] loading plugin "io.containerd.content.v1.content"...  type=io.containerd.content.v1
INFO[2024-11-18T09:48:17.310525595Z] loading plugin "io.containerd.metadata.v1.bolt"...  type=io.containerd.metadata.v1
WARN[2024-11-18T09:48:17.310620363Z] could not use snapshotter devmapper in metadata plugin  error="devmapper not configured"
INFO[2024-11-18T09:48:17.310716322Z] metadata content store policy set             policy=shared
INFO[2024-11-18T09:48:17.360654501Z] loading plugin "io.containerd.gc.v1.scheduler"...  type=io.containerd.gc.v1
INFO[2024-11-18T09:48:17.360899629Z] loading plugin "io.containerd.differ.v1.walking"...  type=io.containerd.differ.v1
INFO[2024-11-18T09:48:17.361256625Z] loading plugin "io.containerd.lease.v1.manager"...  type=io.containerd.lease.v1
INFO[2024-11-18T09:48:17.361607521Z] loading plugin "io.containerd.streaming.v1.manager"...  type=io.containerd.streaming.v1
INFO[2024-11-18T09:48:17.361849547Z] loading plugin "io.containerd.runtime.v1.linux"...  type=io.containerd.runtime.v1
INFO[2024-11-18T09:48:17.362230503Z] loading plugin "io.containerd.monitor.v1.cgroups"...  type=io.containerd.monitor.v1
INFO[2024-11-18T09:48:17.362948045Z] loading plugin "io.containerd.runtime.v2.task"...  type=io.containerd.runtime.v2
ERRO[2024-11-18T09:48:17.363359355Z] cleanup working directory in namespace        error="open /overlay/my_docker_root/containerd/daemon/io.containerd.runtime.v2.task/moby: no such file or directory" namespace=moby
INFO[2024-11-18T09:48:17.363649852Z] loading plugin "io.containerd.runtime.v2.shim"...  type=io.containerd.runtime.v2
INFO[2024-11-18T09:48:17.363877767Z] loading plugin "io.containerd.sandbox.store.v1.local"...  type=io.containerd.sandbox.store.v1
INFO[2024-11-18T09:48:17.364030704Z] loading plugin "io.containerd.sandbox.controller.v1.local"...  type=io.containerd.sandbox.controller.v1
INFO[2024-11-18T09:48:17.364094281Z] loading plugin "io.containerd.service.v1.containers-service"...  type=io.containerd.service.v1
INFO[2024-11-18T09:48:17.364184092Z] loading plugin "io.containerd.service.v1.content-service"...  type=io.containerd.service.v1
INFO[2024-11-18T09:48:17.364248127Z] loading plugin "io.containerd.service.v1.diff-service"...  type=io.containerd.service.v1
INFO[2024-11-18T09:48:17.364320932Z] loading plugin "io.containerd.service.v1.images-service"...  type=io.containerd.service.v1
INFO[2024-11-18T09:48:17.364386677Z] loading plugin "io.containerd.service.v1.introspection-service"...  type=io.containerd.service.v1
INFO[2024-11-18T09:48:17.364442707Z] loading plugin "io.containerd.service.v1.namespaces-service"...  type=io.containerd.service.v1
INFO[2024-11-18T09:48:17.364507026Z] loading plugin "io.containerd.service.v1.snapshots-service"...  type=io.containerd.service.v1
INFO[2024-11-18T09:48:17.364557878Z] loading plugin "io.containerd.service.v1.tasks-service"...  type=io.containerd.service.v1
INFO[2024-11-18T09:48:17.364634172Z] loading plugin "io.containerd.grpc.v1.containers"...  type=io.containerd.grpc.v1
INFO[2024-11-18T09:48:17.364692408Z] loading plugin "io.containerd.grpc.v1.content"...  type=io.containerd.grpc.v1
INFO[2024-11-18T09:48:17.364756531Z] loading plugin "io.containerd.grpc.v1.diff"...  type=io.containerd.grpc.v1
INFO[2024-11-18T09:48:17.364811464Z] loading plugin "io.containerd.grpc.v1.events"...  type=io.containerd.grpc.v1
INFO[2024-11-18T09:48:17.364874868Z] loading plugin "io.containerd.grpc.v1.images"...  type=io.containerd.grpc.v1
INFO[2024-11-18T09:48:17.364922092Z] loading plugin "io.containerd.grpc.v1.introspection"...  type=io.containerd.grpc.v1
INFO[2024-11-18T09:48:17.364989753Z] loading plugin "io.containerd.grpc.v1.leases"...  type=io.containerd.grpc.v1
INFO[2024-11-18T09:48:17.365039121Z] loading plugin "io.containerd.grpc.v1.namespaces"...  type=io.containerd.grpc.v1
INFO[2024-11-18T09:48:17.365110415Z] loading plugin "io.containerd.grpc.v1.sandbox-controllers"...  type=io.containerd.grpc.v1
INFO[2024-11-18T09:48:17.365170412Z] loading plugin "io.containerd.grpc.v1.sandboxes"...  type=io.containerd.grpc.v1
INFO[2024-11-18T09:48:17.365248833Z] loading plugin "io.containerd.grpc.v1.snapshots"...  type=io.containerd.grpc.v1
INFO[2024-11-18T09:48:17.365324571Z] loading plugin "io.containerd.grpc.v1.streaming"...  type=io.containerd.grpc.v1
INFO[2024-11-18T09:48:17.365399413Z] loading plugin "io.containerd.grpc.v1.tasks"...  type=io.containerd.grpc.v1
INFO[2024-11-18T09:48:17.365479040Z] loading plugin "io.containerd.transfer.v1.local"...  type=io.containerd.transfer.v1
INFO[2024-11-18T09:48:17.365542411Z] loading plugin "io.containerd.grpc.v1.transfer"...  type=io.containerd.grpc.v1
INFO[2024-11-18T09:48:17.365607776Z] loading plugin "io.containerd.grpc.v1.version"...  type=io.containerd.grpc.v1
INFO[2024-11-18T09:48:17.365654350Z] loading plugin "io.containerd.internal.v1.restart"...  type=io.containerd.internal.v1
INFO[2024-11-18T09:48:17.365748705Z] loading plugin "io.containerd.tracing.processor.v1.otlp"...  type=io.containerd.tracing.processor.v1
INFO[2024-11-18T09:48:17.365835580Z] skip loading plugin "io.containerd.tracing.processor.v1.otlp"...  error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
INFO[2024-11-18T09:48:17.365887542Z] loading plugin "io.containerd.internal.v1.tracing"...  type=io.containerd.internal.v1
INFO[2024-11-18T09:48:17.365953001Z] skipping tracing processor initialization (no tracing plugin)  error="no OpenTelemetry endpoint: skip plugin"
INFO[2024-11-18T09:48:17.366070591Z] loading plugin "io.containerd.grpc.v1.healthcheck"...  type=io.containerd.grpc.v1
INFO[2024-11-18T09:48:17.366142681Z] loading plugin "io.containerd.nri.v1.nri"...  type=io.containerd.nri.v1
INFO[2024-11-18T09:48:17.366193745Z] NRI interface is disabled by configuration.
INFO[2024-11-18T09:48:17.366376471Z] serving...                                    address=/var/run/docker/containerd/containerd-debug.sock
INFO[2024-11-18T09:48:17.366472659Z] serving...                                    address=/var/run/docker/containerd/containerd.sock.ttrpc
INFO[2024-11-18T09:48:17.366575348Z] serving...                                    address=/var/run/docker/containerd/containerd.sock
INFO[2024-11-18T09:48:17.366630556Z] containerd successfully booted in 0.083419s
INFO[2024-11-18T09:48:18.420384464Z] Loading containers: start.
WARN[2024-11-18T09:48:18.543085862Z] Could not load necessary modules for IPSEC rules: protocol not supported
INFO[2024-11-18T09:48:18.561216701Z] Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address
INFO[2024-11-18T09:48:18.618374255Z] Loading containers: done.
WARN[2024-11-18T09:48:18.637220983Z] WARNING: No swap limit support
INFO[2024-11-18T09:48:18.637416194Z] Docker daemon                                 commit=f417435 containerd-snapshotter=false storage-driver=overlay2 version=25.0.3
INFO[2024-11-18T09:48:18.637815540Z] Daemon has completed initialization
INFO[2024-11-18T09:48:18.812905086Z] API listen on /var/run/docker.sock
INFO[2024-11-18T09:48:37.681911028Z] No non-localhost DNS nameservers are left in resolv.conf. Using default external servers: [nameserver 8.8.8.8 nameserver 8.8.4.4]
INFO[2024-11-18T09:48:37.681951014Z] IPv6 enabled; Adding default IPv6 external servers: [nameserver 2001:4860:4860::8888 nameserver 2001:4860:4860::8844]
WARN[2024-11-18T09:48:37.729220080Z] Failed to delete conntrack state for 172.17.0.2: invalid argument
time="2024-11-18T09:48:37.802051316Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
time="2024-11-18T09:48:37.802083046Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
time="2024-11-18T09:48:37.802091002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2024-11-18T09:48:37.802135513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2024-11-18T09:48:38.158313947Z" level=error msg="failed to enable controllers ([cpuset cpu io memory pids rdma])" error="failed to write subtree controllers [cpuset cpu io memory pids rdma] to \"/sys/fs/cgroup/docker/cgroup.subtree_control\": write /sys/fs/cgroup/docker/cgroup.subtree_control: no such file or directory"
INFO[2024-11-18T09:48:38.188061668Z] ignoring event                                container=576d51d9e2c24c566c9615a870fbac569aea0ebf06e458acf206f2cf135a7f40 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
INFO[2024-11-18T09:48:38.188790052Z] shim disconnected                             id=576d51d9e2c24c566c9615a870fbac569aea0ebf06e458acf206f2cf135a7f40 namespace=moby
WARN[2024-11-18T09:48:38.189550840Z] cleaning up after shim disconnected           id=576d51d9e2c24c566c9615a870fbac569aea0ebf06e458acf206f2cf135a7f40 namespace=moby
INFO[2024-11-18T09:48:38.189628233Z] cleaning up dead shim                         namespace=moby
WARN[2024-11-18T09:48:38.220731386Z] Failed to delete conntrack state for 172.17.0.2: invalid argument

I am not sure if this workaround is viable though, mainly because the /overlay/ directory is used directly.
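For reference, since the packaged init script reads data_root via config_get (quoted at the top), the same workaround could presumably also be applied through UCI instead of a hand-written daemon.json (the section and option names here are assumed from that config_get call):

root@OpenWrt:~# uci set dockerd.globals.data_root='/overlay/my_docker_root/'
root@OpenWrt:~# uci commit dockerd
root@OpenWrt:~# /etc/init.d/dockerd restart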

@G-M0N3Y-2503
Contributor

I mentioned the workaround I used here: #11839 (comment).
If I recall correctly, it was discussed somewhere here and the consensus was that users would configure their disks for Docker themselves, such as using an additional FS on an additional disk.

That said, you aren't the first, so I wonder if there is a better solution.
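A rough sketch of that additional-disk approach might look like the following (the device name and exact package set are assumptions, not something I have verified on this setup):

root@OpenWrt:~# opkg update && opkg install block-mount kmod-fs-ext4 e2fsprogs
root@OpenWrt:~# mkfs.ext4 /dev/sdb1          # spare disk/partition, adjust to your hardware
root@OpenWrt:~# mount /dev/sdb1 /opt/docker
root@OpenWrt:~# /etc/init.d/dockerd restart
# With an ext4 (or similar) filesystem backing data-root, dockerd should be able to select overlay2.
# Adding the mount to /etc/config/fstab (block-mount) would make it persist across reboots.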

@G-M0N3Y-2503
Contributor

Also, I don't recall the specifics of the workaround, but from what I recall the /overlay/ dir is roughly what the overlay FS is built on. It seemed to work for me when I used it, but there may technically be an issue.
