Fails to filter containers by label #8860

Closed · yajo opened this issue Jan 1, 2021 · 5 comments · Fixed by #8862
Assignee: Luap99
Labels: In Progress · kind/bug · locked - please file new issue/PR

Comments


yajo commented Jan 1, 2021

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

Filtering containers by label doesn't work when using the API.

Steps to reproduce the issue:

  1. Search containers, filtered by label, using the API.
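
Expressed as a raw request, that step looks roughly like the sketch below, assuming the rootless socket path from this report and the compat API path /v1.40/ (the exact version segment shouldn't matter). The Docker-compatible API expects the filters query parameter to be a JSON-encoded map from filter name to a list of values; the hostname after http:// is a placeholder, since the transport dials the Unix socket directly:

```go
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"io"
	"net"
	"net/http"
	"net/url"
	"os"
)

func main() {
	// Rootless Podman socket, as reported by `podman info` below.
	sock := os.Getenv("XDG_RUNTIME_DIR") + "/podman/podman.sock"
	client := &http.Client{
		Transport: &http.Transport{
			// Route every request over the Unix socket; the URL host is ignored.
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", sock)
			},
		},
	}

	// Docker-style filters: a JSON map of filter name -> list of values.
	filters, err := json.Marshal(map[string][]string{
		"label": {"com.docker.compose.project=doodba-devel-120"},
	})
	if err != nil {
		panic(err)
	}
	q := url.Values{"all": {"true"}, "filters": {string(filters)}}

	resp, err := client.Get("http://localhost/v1.40/containers/json?" + q.Encode())
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // expected: [], but the bug returns every container
}
```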

Describe the results you received:

> podman system service -t0 &
> export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
> docker container ls --filter label=com.docker.compose.project=doodba-devel-120 --all
CONTAINER ID        IMAGE                                              COMMAND                  CREATED             STATUS              PORTS               NAMES
3819837d4b5f        registry.fedoraproject.org/f33/fedora-toolbox:33   "toolbox --verbose i…"   5 weeks ago                                                 fedora-toolbox-33
8c733ccf289f        postgres:13-alpine                                 "psql --help"            8 weeks ago                                                 sweet_moore
c85edce8dcd0        postgres:13-alpine                                 "psql --host www.gru…"   8 weeks ago                                                 eloquent_lovelace
eb0bd81ec531        k8s.gcr.io/pause:3.2                               ""                       5 months ago                                                260391b458a6-infra
f6d112e7da3d        postgres:13-alpine                                 "psql --host www.gru…"   8 weeks ago                                                 magical_swanson

Note: none of those containers has the label that was searched for.

Describe the results you expected:

I think the best way to describe this is to run the same command using podman directly; the command issued through the API should return the same results:

> podman container ls --filter label=com.docker.compose.project=doodba-devel-120 --all
CONTAINER ID  IMAGE   COMMAND  CREATED  STATUS  PORTS   NAMES

Additional information you deem important (e.g. issue happens only occasionally):

This problem breaks docker-compose, which relies on label filtering to find the containers that belong to a project.

When starting a project, if unrelated containers exist on the host, they produce this warning and error:

> docker-compose up -d
Creating network "doodba-devel-120_default" with the default driver
Creating network "doodba-devel-120_public" with the default driver
WARNING: Found orphan containers (eloquent_lovelace, magical_swanson, sweet_moore, fedora-toolbox-33, 260391b458a6-infra) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
Traceback (most recent call last):
  File "/var/home/yajo/.local/bin/docker-compose", line 8, in <module>
    sys.exit(main())
  File "/var/home/yajo/.local/pipx/venvs/docker-compose/lib64/python3.9/site-packages/compose/cli/main.py", line 67, in main
    command()
  File "/var/home/yajo/.local/pipx/venvs/docker-compose/lib64/python3.9/site-packages/compose/cli/main.py", line 126, in perform_command
    handler(command, command_options)
  File "/var/home/yajo/.local/pipx/venvs/docker-compose/lib64/python3.9/site-packages/compose/cli/main.py", line 1070, in up
    to_attach = up(False)
  File "/var/home/yajo/.local/pipx/venvs/docker-compose/lib64/python3.9/site-packages/compose/cli/main.py", line 1051, in up
    return self.project.up(
  File "/var/home/yajo/.local/pipx/venvs/docker-compose/lib64/python3.9/site-packages/compose/project.py", line 610, in up
    services = self.get_services_without_duplicate(
  File "/var/home/yajo/.local/pipx/venvs/docker-compose/lib64/python3.9/site-packages/compose/project.py", line 219, in get_services_without_duplicate
    service.remove_duplicate_containers()
  File "/var/home/yajo/.local/pipx/venvs/docker-compose/lib64/python3.9/site-packages/compose/service.py", line 679, in remove_duplicate_containers
    for c in self.duplicate_containers():
  File "/var/home/yajo/.local/pipx/venvs/docker-compose/lib64/python3.9/site-packages/compose/service.py", line 693, in duplicate_containers
    if c.number in numbers:
  File "/var/home/yajo/.local/pipx/venvs/docker-compose/lib64/python3.9/site-packages/compose/container.py", line 94, in number
    raise ValueError("Container {} does not have a {} label".format(
ValueError: Container eb0bd81ec531 does not have a com.docker.compose.container-number label

When bringing down the project, the spurious warning also appears to be a symptom of this issue:

> docker-compose down
WARNING: Found orphan containers (sweet_moore, fedora-toolbox-33, magical_swanson, 260391b458a6-infra, eloquent_lovelace) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
Removing network doodba-devel-120_default
Removing network doodba-devel-120_public

Output of podman version:

Version:      2.2.1
API Version:  2.1.0
Go Version:   go1.15.5
Built:        Tue Dec  8 14:37:50 2020
OS/Arch:      linux/amd64

Output of podman info --debug:

host:
  arch: amd64
  buildahVersion: 1.18.0
  cgroupManager: cgroupfs
  cgroupVersion: v1
  conmon:
    package: conmon-2.0.21-3.fc33.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.21, commit: 0f53fb68333bdead5fe4dc5175703e22cf9882ab'
  cpus: 12
  distribution:
    distribution: fedora
    version: "33"
  eventLogger: journald
  hostname: yajolap-tecnativa-com
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 5.9.16-200.fc33.x86_64
  linkmode: dynamic
  memFree: 1536356352
  memTotal: 16615817216
  ociRuntime:
    name: runc
    package: runc-1.0.0-279.dev.gitdedadbf.fc33.x86_64
    path: /usr/bin/runc
    version: |-
      runc version 1.0.0-rc92+dev
      commit: c9a9ce0286785bef3f3c3c87cd1232e535a03e15
      spec: 1.0.2-dev
  os: linux
  remoteSocket:
    exists: true
    path: /run/user/1000/podman/podman.sock
  rootless: true
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.1.8-1.fc33.x86_64
    version: |-
      slirp4netns version 1.1.8
      commit: d361001f495417b880f20329121e3aa431a8f90f
      libslirp: 4.3.1
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.0
  swapFree: 12667576320
  swapTotal: 12670984192
  uptime: 9h 17m 6.35s (Approximately 0.38 days)
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - registry.centos.org
  - docker.io
store:
  configFile: /var/home/yajo/.config/containers/storage.conf
  containerStore:
    number: 5
    paused: 0
    running: 0
    stopped: 5
  graphDriverName: overlay
  graphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: fuse-overlayfs-1.3.0-1.fc33.x86_64
      Version: |-
        fusermount3 version: 3.9.3
        fuse-overlayfs: version 1.3
        FUSE library version 3.9.3
        using FUSE kernel interface version 7.31
  graphRoot: /var/home/yajo/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 38
  runRoot: /run/user/1000/containers
  volumePath: /var/home/yajo/.local/share/containers/storage/volumes
version:
  APIVersion: 2.1.0
  Built: 1607438270
  BuiltTime: Tue Dec  8 14:37:50 2020
  GitCommit: ""
  GoVersion: go1.15.5
  OsArch: linux/amd64
  Version: 2.2.1

Package info (e.g. output of rpm -q podman or apt list podman):

warning: Found bdb Packages database while attempting sqlite backend: using bdb backend.
podman-2.2.1-1.fc33.x86_64

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide?

Yes

Additional environment details (AWS, VirtualBox, physical, etc.):

Physical, Fedora 33 Silverblue.

openshift-ci-robot added the kind/bug label Jan 1, 2021
Luap99 (Member) commented Jan 1, 2021

It looks like this is not yet implemented:

// TODO filters still need to be applied
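
For context, the fix amounts to actually applying the parsed filters before returning the compat container list. A minimal sketch of Docker-style label matching, with hypothetical type and function names rather than the actual libpod code (see PR #8862 for the real change):

```go
package main

import (
	"fmt"
	"strings"
)

// container is a hypothetical stand-in for the compat endpoint's listing type.
type container struct {
	ID     string
	Labels map[string]string
}

// matchesLabelFilter reports whether ctr satisfies one Docker-style label
// filter of the form "key" or "key=value".
func matchesLabelFilter(ctr container, filter string) bool {
	parts := strings.SplitN(filter, "=", 2)
	val, ok := ctr.Labels[parts[0]]
	if !ok {
		return false
	}
	return len(parts) == 1 || val == parts[1]
}

func main() {
	ctrs := []container{
		{ID: "eb0bd81ec531", Labels: map[string]string{}}, // the unlabeled pause container from the report
		{ID: "0123456789ab", Labels: map[string]string{"com.docker.compose.project": "doodba-devel-120"}},
	}
	var filtered []container
	for _, c := range ctrs {
		if matchesLabelFilter(c, "com.docker.compose.project=doodba-devel-120") {
			filtered = append(filtered, c)
		}
	}
	for _, c := range filtered {
		fmt.Println(c.ID) // prints only 0123456789ab
	}
}
```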

Luap99 self-assigned this Jan 1, 2021
Luap99 added the In Progress label Jan 1, 2021
Luap99 pushed a commit to Luap99/libpod that referenced this issue Jan 1, 2021
Fixes containers#8860

Signed-off-by: Paul Holzinger <paul.holzinger@web.de>
Luap99 (Member) commented Jan 1, 2021

PR #8862

BrianSidebotham commented

Just got bitten by this on Fedora 33. :(

yajo (Author) commented Jan 12, 2021

Yes, it seems fixed, but not yet released. Is there any schedule for the next fix release? 🤔

Luap99 (Member) commented Jan 12, 2021

The next release is 3.0. The first rc will be released this week.

github-actions bot added the locked - please file new issue/PR label Sep 22, 2023
github-actions bot locked as resolved and limited conversation to collaborators Sep 22, 2023