
Issue with podman stats command since v2.2.0 #8588

Closed
Lordryte opened this issue Dec 4, 2020 · 12 comments · Fixed by #9110
Labels: kind/bug, locked - please file new issue/PR, stale-issue

Comments

@Lordryte

Lordryte commented Dec 4, 2020

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

podman stats command no longer works since the last update to podman v2.2.0

Steps to reproduce the issue:

  1. podman run -dit ubuntu

  2. podman stats

Describe the results you received:

podman stats
Error: unable to obtain cgroup stats: open /sys/fs/cgroup/user.slice/user-1001.slice/user@1001.service/user.slice/libpod-2504b790675718f38df14eb544badb0ec8df966dc81298329067fa5e0eb6d27b.scope/container/memory.current: no such file or directory

Describe the results you expected:

a functioning podman stats dashboard

Additional information you deem important (e.g. issue happens only occasionally):

I had 2 VPS servers running the same configuration, with podman stats working on both. After podman was updated to v2.2.0 the command kept working at first; I only started getting the error message once the running command was killed and then restarted. The same behaviour was observed on both servers.

Output of podman version:

Version:      2.2.0
API Version:  2.1.0
Go Version:   go1.15.2
Built:        Thu Jan  1 01:00:00 1970
OS/Arch:      linux/amd64

Output of podman info --debug:

host:
  arch: amd64
  buildahVersion: 1.18.0
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: 'conmon: /usr/libexec/podman/conmon'
    path: /usr/libexec/podman/conmon
    version: 'conmon version 2.0.20, commit: '
  cpus: 10
  distribution:
    distribution: ubuntu
    version: "20.04"
  eventLogger: journald
  hostname: 
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1002
      size: 1
    - container_id: 1
      host_id: 165536
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1001
      size: 1
    - container_id: 1
      host_id: 165536
      size: 65536
  kernel: 5.4.0-52-generic
  linkmode: dynamic
  memFree: 53689827328
  memTotal: 63221174272
  ociRuntime:
    name: crun
    package: 'crun: /usr/bin/crun'
    path: /usr/bin/crun
    version: |-
      crun version UNKNOWN
      commit: 3e46dd849fdf6bfa68127786e073318184641f05
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  os: linux
  remoteSocket:
    path: /run/user/1001/podman/podman.sock
  rootless: true
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: 'slirp4netns: /usr/bin/slirp4netns'
    version: |-
      slirp4netns version 1.1.7
      commit: unknown
      libslirp: 4.3.1-git
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.4.3
  swapFree: 2147479552
  swapTotal: 2147479552
  uptime: 824h 58m 17.19s (Approximately 34.33 days)
registries:
  search:
  - docker.io
  - quay.io
store:
  configFile: /home/podman/.config/containers/storage.conf
  containerStore:
    number: 6
    paused: 0
    running: 6
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: 'fuse-overlayfs: /usr/bin/fuse-overlayfs'
      Version: |-
        fusermount3 version: 3.9.0
        fuse-overlayfs: version 1.1.0
        FUSE library version 3.9.0
        using FUSE kernel interface version 7.31
  graphRoot: /home/podman/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 58
  runRoot: /run/user/1001/containers
  volumePath: /home/podman/.local/share/containers/storage/volumes
version:
  APIVersion: 2.1.0
  Built: 0
  BuiltTime: Thu Jan  1 01:00:00 1970
  GitCommit: ""
  GoVersion: go1.15.2
  OsArch: linux/amd64
  Version: 2.2.0

Package info (e.g. output of rpm -q podman or apt list podman):

podman/unknown,now 2.2.0~2 amd64 [installed]
podman/unknown 2.2.0~2 arm64
podman/unknown 2.2.0~2 armhf
podman/unknown 2.2.0~2 s390x

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide?

Yes

Additional environment details (AWS, VirtualBox, physical, etc.):

VPS running Ubuntu 20.04

openshift-ci-robot added the kind/bug label on Dec 4, 2020
@mheon
Member

mheon commented Dec 4, 2020

@vrothberg This looks like it could be your cgroup path changes?

@zhangguanzhang
Collaborator

Please show the results of these commands, @Lordryte:

sudo ls -l /sys/fs/cgroup/user.slice/

sudo ls -l /sys/fs/cgroup/

sudo ls -l /sys/fs/cgroup/systemd/user.slice

sudo ls -l /sys/fs/cgroup/systemd/user.slice/user-1001.slice

sudo ls -l /sys/fs/cgroup/systemd/user.slice/user-1001.slice/user.slice

@Lordryte
Author

Lordryte commented Dec 4, 2020

sudo ls -l /sys/fs/cgroup/user.slice/

total 0
-r--r--r--  1 root root 0 Dec  4 04:14 cgroup.controllers
-r--r--r--  1 root root 0 Nov 23 21:56 cgroup.events
-rw-r--r--  1 root root 0 Dec  4 04:14 cgroup.freeze
-rw-r--r--  1 root root 0 Dec  4 04:14 cgroup.max.depth
-rw-r--r--  1 root root 0 Dec  4 04:14 cgroup.max.descendants
-rw-r--r--  1 root root 0 Dec  4 04:14 cgroup.procs
-r--r--r--  1 root root 0 Dec  4 04:14 cgroup.stat
-rw-r--r--  1 root root 0 Nov 23 21:56 cgroup.subtree_control
-rw-r--r--  1 root root 0 Dec  4 04:14 cgroup.threads
-rw-r--r--  1 root root 0 Dec  4 04:14 cgroup.type
-rw-r--r--  1 root root 0 Dec  4 04:14 cpu.pressure
-r--r--r--  1 root root 0 Dec  4 04:14 cpu.stat
-rw-r--r--  1 root root 0 Dec  4 04:14 io.pressure
-r--r--r--  1 root root 0 Dec  4 04:14 memory.current
-r--r--r--  1 root root 0 Dec  4 04:14 memory.events
-r--r--r--  1 root root 0 Dec  4 04:14 memory.events.local
-rw-r--r--  1 root root 0 Dec  4 04:14 memory.high
-rw-r--r--  1 root root 0 Dec  4 04:14 memory.low
-rw-r--r--  1 root root 0 Dec  4 04:14 memory.max
-rw-r--r--  1 root root 0 Dec  4 04:14 memory.min
-rw-r--r--  1 root root 0 Dec  4 04:14 memory.oom.group
-rw-r--r--  1 root root 0 Dec  4 04:14 memory.pressure
-r--r--r--  1 root root 0 Dec  4 04:14 memory.stat
-r--r--r--  1 root root 0 Dec  4 04:14 pids.current
-r--r--r--  1 root root 0 Dec  4 04:14 pids.events
-rw-r--r--  1 root root 0 Dec  4 04:14 pids.max
drwxr-xr-x 11 root root 0 Dec  4 00:21 user-1001.slice

sudo ls -l /sys/fs/cgroup/

total 0
-r--r--r--  1 root root 0 Dec  4 04:17 cgroup.controllers
-rw-r--r--  1 root root 0 Dec  4 04:17 cgroup.max.depth
-rw-r--r--  1 root root 0 Dec  4 04:17 cgroup.max.descendants
-rw-r--r--  1 root root 0 Dec  4 04:17 cgroup.procs
-r--r--r--  1 root root 0 Dec  4 04:17 cgroup.stat
-rw-r--r--  1 root root 0 Nov 23 21:56 cgroup.subtree_control
-rw-r--r--  1 root root 0 Dec  4 04:17 cgroup.threads
-rw-r--r--  1 root root 0 Dec  4 04:17 cpu.pressure
-r--r--r--  1 root root 0 Dec  4 04:17 cpuset.cpus.effective
-r--r--r--  1 root root 0 Dec  4 04:17 cpuset.mems.effective
drwxr-xr-x  2 root root 0 Nov  3 19:34 init.scope
-rw-r--r--  1 root root 0 Dec  4 04:17 io.cost.model
-rw-r--r--  1 root root 0 Dec  4 04:17 io.cost.qos
-rw-r--r--  1 root root 0 Dec  4 04:17 io.pressure
-rw-r--r--  1 root root 0 Dec  4 04:17 memory.pressure
drwxr-xr-x 60 root root 0 Dec  4 04:09 system.slice
drwxr-xr-x  3 root root 0 Dec  4 00:36 user.slice

sudo ls -l /sys/fs/cgroup/systemd/user.slice

ls: cannot access '/sys/fs/cgroup/systemd/user.slice': No such file or directory

sudo ls -l /sys/fs/cgroup/systemd/user.slice/user-1001.slice

ls: cannot access '/sys/fs/cgroup/systemd/user.slice/user-1001.slice': No such file or directory

sudo ls -l /sys/fs/cgroup/systemd/user.slice/user-1001.slice/user.slice

ls: cannot access '/sys/fs/cgroup/systemd/user.slice/user-1001.slice/user.slice': No such file or directory

@zhangguanzhang
Collaborator

Please show the results of these commands, @Lordryte:

sudo ls -l /sys/fs/cgroup/user.slice/user-1001.slice

sudo ls -l /sys/fs/cgroup/user.slice/user-1001.slice/user@1001.service

sudo ls -l  /sys/fs/cgroup/user.slice/user-1001.slice/user@1001.service/user.slice/

sudo ls -l  /sys/fs/cgroup/user.slice/user-1001.slice/user@1001.service/user.slice/libpod-2504b790675718f38df14eb544badb0ec8df966dc81298329067fa5e0eb6d27b.scope/

sudo ls -l  /sys/fs/cgroup/user.slice/user-1001.slice/user@1001.service/user.slice/libpod-2504b790675718f38df14eb544badb0ec8df966dc81298329067fa5e0eb6d27b.scope/container/

@Lordryte
Author

Lordryte commented Dec 4, 2020

sudo ls -l /sys/fs/cgroup/user.slice/user-1001.slice

total 0
-r--r--r--  1 root   root   0 Dec  4 23:46 cgroup.controllers
-r--r--r--  1 root   root   0 Nov 23 21:56 cgroup.events
-rw-r--r--  1 root   root   0 Dec  4 23:46 cgroup.freeze
-rw-r--r--  1 root   root   0 Dec  4 23:46 cgroup.max.depth
-rw-r--r--  1 root   root   0 Dec  4 23:46 cgroup.max.descendants
-rw-r--r--  1 root   root   0 Dec  4 23:46 cgroup.procs
-r--r--r--  1 root   root   0 Dec  4 23:46 cgroup.stat
-rw-r--r--  1 root   root   0 Nov 23 21:56 cgroup.subtree_control
-rw-r--r--  1 root   root   0 Dec  4 23:46 cgroup.threads
-rw-r--r--  1 root   root   0 Dec  4 23:46 cgroup.type
-rw-r--r--  1 root   root   0 Dec  4 23:46 cpu.pressure
-r--r--r--  1 root   root   0 Dec  4 23:46 cpu.stat
-rw-r--r--  1 root   root   0 Dec  4 23:46 io.pressure
-r--r--r--  1 root   root   0 Dec  4 23:46 memory.current
-r--r--r--  1 root   root   0 Dec  4 23:46 memory.events
-r--r--r--  1 root   root   0 Dec  4 23:46 memory.events.local
-rw-r--r--  1 root   root   0 Dec  4 23:46 memory.high
-rw-r--r--  1 root   root   0 Dec  4 23:46 memory.low
-rw-r--r--  1 root   root   0 Dec  4 23:46 memory.max
-rw-r--r--  1 root   root   0 Dec  4 23:46 memory.min
-rw-r--r--  1 root   root   0 Dec  4 23:46 memory.oom.group
-rw-r--r--  1 root   root   0 Dec  4 23:46 memory.pressure
-r--r--r--  1 root   root   0 Dec  4 23:46 memory.stat
-r--r--r--  1 root   root   0 Dec  4 23:46 pids.current
-r--r--r--  1 root   root   0 Dec  4 23:46 pids.events
-rw-r--r--  1 root   root   0 Dec  4 23:46 pids.max
drwxr-xr-x  2 root   root   0 Nov 14 01:36 session-2108.scope
drwxr-xr-x  2 root   root   0 Nov 14 01:40 session-2112.scope
drwxr-xr-x  2 root   root   0 Nov 14 02:42 session-2118.scope
drwxr-xr-x  2 root   root   0 Nov 14 02:56 session-2123.scope
drwxr-xr-x  2 root   root   0 Nov 20 15:10 session-2803.scope
drwxr-xr-x  2 root   root   0 Dec  4 23:32 session-4270.scope
drwxr-xr-x  2 root   root   0 Dec  4 23:40 session-4273.scope
drwxr-xr-x  2 root   root   0 Oct 30 16:06 session-7.scope
drwxr-xr-x 21 podman podman 0 Dec  4 23:45 user@1001.service
drwxr-xr-x  2 root   root   0 Nov  3 19:34 user-runtime-dir@1001.service

sudo ls -l /sys/fs/cgroup/user.slice/user-1001.slice/user@1001.service

total 0
drwxr-xr-x  2 podman podman 0 Nov  3 02:40 boot.mount
-r--r--r--  1 root   root   0 Dec  4 23:46 cgroup.controllers
-r--r--r--  1 root   root   0 Nov 23 21:56 cgroup.events
-rw-r--r--  1 root   root   0 Dec  4 23:46 cgroup.freeze
-rw-r--r--  1 root   root   0 Dec  4 23:46 cgroup.max.depth
-rw-r--r--  1 root   root   0 Dec  4 23:46 cgroup.max.descendants
-rw-r--r--  1 podman podman 0 Oct 30 16:05 cgroup.procs
-r--r--r--  1 root   root   0 Dec  4 23:46 cgroup.stat
-rw-r--r--  1 podman podman 0 Dec  4 23:45 cgroup.subtree_control
-rw-r--r--  1 podman podman 0 Oct 30 16:05 cgroup.threads
-rw-r--r--  1 root   root   0 Dec  4 23:46 cgroup.type
-rw-r--r--  1 root   root   0 Dec  4 23:46 cpu.pressure
-r--r--r--  1 root   root   0 Dec  4 23:46 cpu.stat
drwxr-xr-x  2 podman podman 0 Oct 30 16:06 dbus.service
drwxr-xr-x  2 podman podman 0 Oct 30 16:05 dbus.socket
drwxr-xr-x  2 podman podman 0 Nov  3 02:40 dev-hugepages.mount
drwxr-xr-x  2 podman podman 0 Nov  3 02:40 dev-mqueue.mount
drwxr-xr-x  2 podman podman 0 Nov  3 02:40 dirmngr.socket
drwxr-xr-x  2 podman podman 0 Nov  3 02:40 gpg-agent-browser.socket
drwxr-xr-x  2 podman podman 0 Nov  3 02:40 gpg-agent-extra.socket
drwxr-xr-x  2 podman podman 0 Nov  3 02:40 gpg-agent.socket
drwxr-xr-x  2 podman podman 0 Nov  3 02:40 gpg-agent-ssh.socket
drwxr-xr-x  2 podman podman 0 Oct 30 16:05 init.scope
-rw-r--r--  1 root   root   0 Dec  4 23:46 io.pressure
-r--r--r--  1 root   root   0 Dec  4 23:46 memory.current
-r--r--r--  1 root   root   0 Nov 23 21:56 memory.events
-r--r--r--  1 root   root   0 Dec  4 23:46 memory.events.local
-rw-r--r--  1 root   root   0 Dec  4 23:46 memory.high
-rw-r--r--  1 root   root   0 Dec  4 23:46 memory.low
-rw-r--r--  1 root   root   0 Dec  4 23:46 memory.max
-rw-r--r--  1 root   root   0 Dec  4 23:46 memory.min
-rw-r--r--  1 root   root   0 Dec  4 23:46 memory.oom.group
-rw-r--r--  1 root   root   0 Dec  4 23:46 memory.pressure
-r--r--r--  1 root   root   0 Dec  4 23:46 memory.stat
drwxr-xr-x  2 podman podman 0 Nov  3 02:40 -.mount
-r--r--r--  1 root   root   0 Dec  4 23:46 pids.current
-r--r--r--  1 root   root   0 Dec  4 23:46 pids.events
-rw-r--r--  1 root   root   0 Dec  4 23:46 pids.max
drwxr-xr-x  2 podman podman 0 Nov  3 02:40 run-user-1001.mount
drwxr-xr-x  2 podman podman 0 Nov  3 02:40 swapfile.swap
drwxr-xr-x  2 podman podman 0 Nov  3 02:40 sys-fs-fuse-connections.mount
drwxr-xr-x  2 podman podman 0 Nov  3 02:40 sys-kernel-config.mount
drwxr-xr-x  2 podman podman 0 Nov  3 02:40 sys-kernel-debug.mount
drwxr-xr-x  2 podman podman 0 Nov  3 02:40 sys-kernel-tracing.mount
drwxr-xr-x 20 podman podman 0 Dec  4 01:57 user.slice

sudo ls -l /sys/fs/cgroup/user.slice/user-1001.slice/user@1001.service/user.slice/

total 0
-r--r--r-- 1 podman podman 0 Oct 30 16:06 cgroup.controllers
-r--r--r-- 1 podman podman 0 Oct 30 16:06 cgroup.events
-rw-r--r-- 1 podman podman 0 Oct 30 16:06 cgroup.freeze
-rw-r--r-- 1 podman podman 0 Oct 30 16:06 cgroup.max.depth
-rw-r--r-- 1 podman podman 0 Oct 30 16:06 cgroup.max.descendants
-rw-r--r-- 1 podman podman 0 Oct 30 16:06 cgroup.procs
-r--r--r-- 1 podman podman 0 Oct 30 16:06 cgroup.stat
-rw-r--r-- 1 podman podman 0 Dec  4 23:45 cgroup.subtree_control
-rw-r--r-- 1 podman podman 0 Oct 30 16:06 cgroup.threads
-rw-r--r-- 1 podman podman 0 Oct 30 16:06 cgroup.type
-rw-r--r-- 1 podman podman 0 Oct 30 16:06 cpu.pressure
-r--r--r-- 1 podman podman 0 Oct 30 16:06 cpu.stat
-rw-r--r-- 1 podman podman 0 Oct 30 16:06 io.pressure
drwxr-xr-x 3 podman podman 0 Dec  4 00:58 libpod-2504b790675718f38df14eb544badb0ec8df966dc81298329067fa5e0eb6d27b.scope
drwxr-xr-x 3 podman podman 0 Dec  4 00:58 libpod-3306fdd994c1cb91a40561734b731d8b8bc281f6fe0eb5061a961a3a640ba4d0.scope
drwxr-xr-x 3 podman podman 0 Dec  4 00:58 libpod-342b84e517926a4b99feee18e3b3690b4c4c0039ad66509edecf1fbd82e2dc8f.scope
drwxr-xr-x 3 podman podman 0 Dec  4 00:58 libpod-45bdd1c897c3a3350b5c3f34ffbf0d9463b155f8084e18ddb1dd5e032ca1867d.scope
drwxr-xr-x 3 podman podman 0 Dec  4 00:58 libpod-4eea7a88f93c58e6b3a1064a44c6c5c8ddcb864276344d13fbfc59c6e74fe572.scope
drwxr-xr-x 2 podman podman 0 Dec  4 00:58 libpod-conmon-2504b790675718f38df14eb544badb0ec8df966dc81298329067fa5e0eb6d27b.scope
drwxr-xr-x 2 podman podman 0 Dec  4 00:58 libpod-conmon-3306fdd994c1cb91a40561734b731d8b8bc281f6fe0eb5061a961a3a640ba4d0.scope
drwxr-xr-x 2 podman podman 0 Dec  4 00:58 libpod-conmon-342b84e517926a4b99feee18e3b3690b4c4c0039ad66509edecf1fbd82e2dc8f.scope
drwxr-xr-x 2 podman podman 0 Dec  4 00:58 libpod-conmon-45bdd1c897c3a3350b5c3f34ffbf0d9463b155f8084e18ddb1dd5e032ca1867d.scope
drwxr-xr-x 2 podman podman 0 Dec  4 00:58 libpod-conmon-4eea7a88f93c58e6b3a1064a44c6c5c8ddcb864276344d13fbfc59c6e74fe572.scope
drwxr-xr-x 2 podman podman 0 Dec  4 00:58 libpod-conmon-da0144621ad38781e7724897473cc23dbc4c58d340e1cbfd5ac75685a5057515.scope
drwxr-xr-x 3 podman podman 0 Dec  4 00:58 libpod-da0144621ad38781e7724897473cc23dbc4c58d340e1cbfd5ac75685a5057515.scope
-r--r--r-- 1 podman podman 0 Oct 30 16:06 memory.current
-r--r--r-- 1 podman podman 0 Oct 30 16:06 memory.events
-r--r--r-- 1 podman podman 0 Oct 30 16:06 memory.events.local
-rw-r--r-- 1 podman podman 0 Oct 30 16:06 memory.high
-rw-r--r-- 1 podman podman 0 Oct 30 16:06 memory.low
-rw-r--r-- 1 podman podman 0 Oct 30 16:06 memory.max
-rw-r--r-- 1 podman podman 0 Oct 30 16:06 memory.min
-rw-r--r-- 1 podman podman 0 Oct 30 16:06 memory.oom.group
-rw-r--r-- 1 podman podman 0 Oct 30 16:06 memory.pressure
-r--r--r-- 1 podman podman 0 Oct 30 16:06 memory.stat
-r--r--r-- 1 podman podman 0 Oct 30 16:06 pids.current
-r--r--r-- 1 podman podman 0 Oct 30 16:06 pids.events
-rw-r--r-- 1 podman podman 0 Oct 30 16:06 pids.max
drwxr-xr-x 2 podman podman 0 Dec  4 00:58 podman-3451256.scope
drwxr-xr-x 2 podman podman 0 Dec  4 00:58 podman-3451312.scope
drwxr-xr-x 2 podman podman 0 Dec  4 00:58 podman-3451374.scope
drwxr-xr-x 2 podman podman 0 Dec  4 00:58 podman-3451437.scope
drwxr-xr-x 2 podman podman 0 Dec  4 00:58 podman-3451517.scope
drwxr-xr-x 2 podman podman 0 Dec  4 00:58 podman-3451578.scope

sudo ls -l /sys/fs/cgroup/user.slice/user-1001.slice/user@1001.service/user.slice/libpod-2504b790675718f38df14eb544badb0ec8df966dc81298329067fa5e0eb6d27b.scope/

total 0
-r--r--r-- 1 podman podman 0 Dec  4 00:58 cgroup.controllers
-r--r--r-- 1 podman podman 0 Dec  4 00:58 cgroup.events
-rw-r--r-- 1 podman podman 0 Dec  4 00:58 cgroup.freeze
-rw-r--r-- 1 podman podman 0 Dec  4 00:58 cgroup.max.depth
-rw-r--r-- 1 podman podman 0 Dec  4 00:58 cgroup.max.descendants
-rw-r--r-- 1 podman podman 0 Dec  4 00:58 cgroup.procs
-r--r--r-- 1 podman podman 0 Dec  4 00:58 cgroup.stat
-rw-r--r-- 1 podman podman 0 Dec  4 00:58 cgroup.subtree_control
-rw-r--r-- 1 podman podman 0 Dec  4 00:58 cgroup.threads
-rw-r--r-- 1 podman podman 0 Dec  4 00:58 cgroup.type
drwxr-xr-x 2 podman podman 0 Dec  4 00:58 container
-rw-r--r-- 1 podman podman 0 Dec  4 00:58 cpu.pressure
-r--r--r-- 1 podman podman 0 Dec  4 00:58 cpu.stat
-rw-r--r-- 1 podman podman 0 Dec  4 00:58 io.pressure
-r--r--r-- 1 podman podman 0 Dec  4 00:58 memory.current
-r--r--r-- 1 podman podman 0 Dec  4 00:58 memory.events
-r--r--r-- 1 podman podman 0 Dec  4 00:58 memory.events.local
-rw-r--r-- 1 podman podman 0 Dec  4 00:58 memory.high
-rw-r--r-- 1 podman podman 0 Dec  4 00:58 memory.low
-rw-r--r-- 1 podman podman 0 Dec  4 00:58 memory.max
-rw-r--r-- 1 podman podman 0 Dec  4 00:58 memory.min
-rw-r--r-- 1 podman podman 0 Dec  4 00:58 memory.oom.group
-rw-r--r-- 1 podman podman 0 Dec  4 00:58 memory.pressure
-r--r--r-- 1 podman podman 0 Dec  4 00:58 memory.stat
-r--r--r-- 1 podman podman 0 Dec  4 00:58 pids.current
-r--r--r-- 1 podman podman 0 Dec  4 00:58 pids.events
-rw-r--r-- 1 podman podman 0 Dec  4 00:58 pids.max

sudo ls -l /sys/fs/cgroup/user.slice/user-1001.slice/user@1001.service/user.slice/libpod-2504b790675718f38df14eb544badb0ec8df966dc81298329067fa5e0eb6d27b.scope/container/

total 0
-r--r--r-- 1 podman podman 0 Dec  4 00:58 cgroup.controllers
-r--r--r-- 1 podman podman 0 Dec  4 00:58 cgroup.events
-rw-r--r-- 1 podman podman 0 Dec  4 00:58 cgroup.freeze
-rw-r--r-- 1 podman podman 0 Dec  4 00:58 cgroup.max.depth
-rw-r--r-- 1 podman podman 0 Dec  4 00:58 cgroup.max.descendants
-rw-r--r-- 1 podman podman 0 Dec  4 00:58 cgroup.procs
-r--r--r-- 1 podman podman 0 Dec  4 00:58 cgroup.stat
-rw-r--r-- 1 podman podman 0 Dec  4 00:58 cgroup.subtree_control
-rw-r--r-- 1 podman podman 0 Dec  4 00:58 cgroup.threads
-rw-r--r-- 1 podman podman 0 Dec  4 00:58 cgroup.type
-rw-r--r-- 1 podman podman 0 Dec  4 00:58 cpu.pressure
-r--r--r-- 1 podman podman 0 Dec  4 00:58 cpu.stat
-rw-r--r-- 1 podman podman 0 Dec  4 00:58 io.pressure
-rw-r--r-- 1 podman podman 0 Dec  4 00:58 memory.pressure
-r--r--r-- 1 podman podman 0 Dec  4 00:58 pids.current
-r--r--r-- 1 podman podman 0 Dec  4 00:58 pids.events
-rw-r--r-- 1 podman podman 0 Dec  4 00:58 pids.max

@Lordryte
Author

Additional info: if I run the container inside a pod and use podman pod stats, it works properly.
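
For anyone who wants to reproduce that workaround, the rough sequence is (the pod name here is only an illustrative example, not taken from this report):

podman pod create --name statspod
podman run -dit --pod statspod ubuntu
podman pod stats statspod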

@mheon
Member

mheon commented Dec 15, 2020

Appears that the container scope, which I believe is made by crun, is missing a lot of controllers? @giuseppe Any ideas?

@giuseppe
Member

can you show me the content of /sys/fs/cgroup/user.slice/user-1001.slice/user@1001.service/user.slice/libpod-2504b790675718f38df14eb544badb0ec8df966dc81298329067fa5e0eb6d27b.scope/container/cgroup.controllers ?

Some of the cgroup controllers might not be enabled for the unprivileged user
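
As an aside, a quick way to see what has been delegated to the user manager itself, one level above the container scope (paths assume UID 1001 as in this report), is to read the same files on the user@1001.service cgroup:

cat /sys/fs/cgroup/user.slice/user-1001.slice/user@1001.service/cgroup.controllers
cat /sys/fs/cgroup/user.slice/user-1001.slice/user@1001.service/cgroup.subtree_control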

@Lordryte
Author

cat /sys/fs/cgroup/user.slice/user-1001.slice/user@1001.service/user.slice/libpod-2504b790675718f38df14eb544badb0ec8df966dc81298329067fa5e0eb6d27b.scope/container/cgroup.controllers:
pids

@giuseppe
Member

Thanks for confirming it. It looks like only the pids controller is available to the rootless user, not the memory controller.
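
For reference, a commonly suggested way to delegate the missing controllers to rootless users on a cgroup v2 host is a systemd drop-in for user@.service. This is only a sketch, not verified on this exact setup; delegating the cpu/cpuset controllers to users requires a reasonably recent systemd, and a re-login (or restart of user@1001.service) plus recreating the containers is needed afterwards:

# run as root
mkdir -p /etc/systemd/system/user@.service.d
cat > /etc/systemd/system/user@.service.d/delegate.conf <<'EOF'
[Service]
Delegate=cpu cpuset io memory pids
EOF
systemctl daemon-reload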

@danopia

danopia commented Dec 20, 2020

Hello, I just upgraded from podman 2.0.4 to 2.2.1 on a Fedora 32 server and encountered the same error message with the stats commands; however, I am running rootful.

dan@ausbox ~> sudo podman stats
Error: unable to obtain cgroup stats: open /sys/fs/cgroup/machine.slice/machine-libpod_pod_526f3efd57686a81d04a0f369b56273ec065a10d2d8cc7920db94e4ac6213b9f.slice/libpod-0fb4fc9a79fb3bd86a260459b292a9691795cfcb71c232ed40cc67b936dfb92c.scope/container/memory.current: no such file or directory

dan@ausbox ~> sudo podman pod stats
Error: unable to obtain cgroup stats: open /sys/fs/cgroup/machine.slice/machine-libpod_pod_19a857a7326218d14ea5907d66863236a9e0561c616dfe6c99d8b76a59831323.slice/libpod-2db757e2f42d5afb0d0ac2e5822f7ba2f791100fa0b0dd43813bfd828b90ae07.scope/container/pids.current: no such file or directory

# what's actually there:
dan@ausbox ~> ls /sys/fs/cgroup/machine.slice/machine-libpod_pod_19a857a7326218d14ea5907d66863236a9e0561c616dfe6c99d8b76a59831323.slice/libpod-2db757e2f42d5afb0d0ac2e5822f7ba2f791100fa0b0dd43813bfd828b90ae07.scope/container/
cgroup.controllers  cgroup.events  cgroup.freeze  cgroup.max.depth  cgroup.max.descendants  cgroup.procs  cgroup.stat  cgroup.subtree_control  cgroup.threads  cgroup.type  cpu.pressure  cpu.stat  io.pressure  memory.pressure

After some investigation, I found that also upgrading crun (from 0.14 to 0.16) and then recreating all pods/containers fixed the stats commands. So it seems like an outdated crun may be one source of this error message.
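
To rule out the runtime on another machine, a rough check along the same lines would be to confirm the versions and recreate an affected container (the container name below is only an example):

crun --version
podman --version
podman rm -f mycontainer
podman run -dit --name mycontainer ubuntu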

Maybe missing controllers could be handled better than completely crashing the command?

@github-actions

A friendly reminder that this issue had no activity for 30 days.

rhatdan added a commit to rhatdan/podman that referenced this issue Jan 27, 2021
It is fairly common for certain cgroups controllers to
not be enabled on a system.  We should Warn when this happens
versus failing, when doing podman stats command.  This way users
can get information from the other controllers.

Fixes: containers#8588

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
mheon pushed a commit to mheon/libpod that referenced this issue Feb 4, 2021
It is fairly common for certain cgroups controllers to
not be enabled on a system.  We should Warn when this happens
versus failing, when doing podman stats command.  This way users
can get information from the other controllers.

Fixes: containers#8588

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
github-actions bot added the locked - please file new issue/PR label on Sep 22, 2023
github-actions bot locked as resolved and limited conversation to collaborators on Sep 22, 2023