Error while removing network: network <network_name> id <network_id> has active endpoints #5243

sydevinfra opened this issue Jul 8, 2024 · 2 comments

sydevinfra commented Jul 8, 2024

Description

Sometimes a network fails to be removed, with the following error:

$ docker network rm s_web
Error response from daemon: error while removing network: network s_web id 6b63ee4e95208b6869d072ef45803dbd2b9a517b03b1679c586814a66552f037 has active endpoints

Running docker network inspect s_web shows no containers attached; in fact, there are no containers on the host at all.

$ docker network inspect s_web
[
    {
        "Name": "s_web",
        "Id": "6b63ee4e95208b6869d072ef45803dbd2b9a517b03b1679c586814a66552f037",
        "Created": "2024-07-08T11:17:24.896578195+02:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.252.0.0/16"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {
            "com.docker.compose.network": "web",
            "com.docker.compose.project": "s",
            "com.docker.compose.version": "2.28.1"
        }
    }
]
$ docker container ls -a
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES

The only way to remove the network is to restart the Docker daemon and then run the removal command again.
The error appears randomly, so it is fairly hard to reproduce on demand.
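
For reference, this is roughly what the workaround looks like on this setup (a sketch; it assumes the rootless daemon runs under the systemd user unit set up by dockerd-rootless-setuptool.sh, so the unit name may differ on other installs):

$ systemctl --user restart docker   # restart the rootless daemon
$ docker network rm s_web           # removal succeeds after the restart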

Reproduce

docker compose up # compose.yaml in additional info
docker compose down --remove-orphans -v
docker network rm s_web
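
Because the failure is intermittent, a loop along these lines (purely illustrative, not part of our CI) can be left running to cycle the stack until the cleanup leaves the network behind:

# Illustrative only: cycle the stack until the network survives the "down".
# "up -d" is used here so the loop does not block on foreground containers.
while true; do
  docker compose up -d
  docker compose down --remove-orphans -v
  # If the network was left behind, the explicit removal hits the
  # "has active endpoints" error; stop there so the state can be inspected.
  docker network rm s_web 2>&1 | grep -q 'has active endpoints' && break
done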

Expected behavior

The network should be removed.

docker version

Client: Docker Engine - Community
 Version:           27.0.3
 API version:       1.46
 Go version:        go1.21.11
 Git commit:        7d4bcd8
 Built:             Sat Jun 29 00:04:07 2024
 OS/Arch:           linux/amd64
 Context:           default

Server: Docker Engine - Community
 Engine:
  Version:          27.0.3
  API version:      1.46 (minimum version 1.24)
  Go version:       go1.21.11
  Git commit:       662f78c
  Built:            Sat Jun 29 00:02:31 2024
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.7.18
  GitCommit:        ae71819c4f5e67bb4d5ae76a6b735f29cc25774e
 runc:
  Version:          1.1.13
  GitCommit:        v1.1.13-0-g58aa920
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
 rootlesskit:
  Version:          2.0.2
  ApiVersion:       1.1.1
  NetworkDriver:    slirp4netns
  PortDriver:       builtin
  StateDir:         /run/user/1000/dockerd-rootless
 slirp4netns:
  Version:          1.2.3
  GitCommit:        c22fde291bb35b354e6ca44d13be181c76a0a432

docker info

Client: Docker Engine - Community
 Version:    27.0.3
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.15.1
    Path:     /usr/libexec/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  v2.28.1
    Path:     /usr/libexec/docker/cli-plugins/docker-compose

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 18
 Server Version: 27.0.3
 Storage Driver: fuse-overlayfs
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
 Swarm: inactive
 Runtimes: runc io.containerd.runc.v2
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: ae71819c4f5e67bb4d5ae76a6b735f29cc25774e
 runc version: v1.1.13-0-g58aa920
 init version: de40ad0
 Security Options:
  seccomp
   Profile: builtin
  rootless
  cgroupns
 Kernel Version: 5.14.0-427.22.1.el9_4.x86_64
 Operating System: Rocky Linux 9.4 (Blue Onyx)
 OSType: linux
 Architecture: x86_64
 CPUs: 8
 Total Memory: 11.43GiB
 Name: <redacted>
 ID: 07ebe6ee-777a-47e0-b6b7-d2ada901fe0e
 Docker Root Dir: /home/devroot/.local/share/docker
 Debug Mode: false
 Username: <redacted>
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No cpu cfs quota support
WARNING: No cpu cfs period support
WARNING: No cpu shares support
WARNING: No cpuset support
WARNING: No io.weight support
WARNING: No io.weight (per device) support
WARNING: No io.max (rbps) support
WARNING: No io.max (wbps) support
WARNING: No io.max (riops) support
WARNING: No io.max (wiops) support
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled

Additional Info

Running Docker rootless on Rocky Linux.
The same problem is seen on multiple servers.
The host is used as a CI/CD runner: the network is created by docker compose up and should be removed by docker compose down --remove-orphans -v.

Here is the compose.yaml with some redacted values:

services:
  php:
    image: <redacted_some_private_symfony_php_image>
    working_dir: /home/app
    volumes:
      - .:/home/app
      - ~/.ssh:/home/.ssh
      - ./docker/php/conf.d/app.ini:/usr/local/etc/php/conf.d/app.ini:ro
    networks:
      - web
    external_links:
      - nginx-proxy:mercure.docker
    environment:
      - XDEBUG_CONFIG=client_host=host.docker.internal client_port=9000
      - PHP_IDE_CONFIG=serverName=php-docker

  nginx:
    image: nginx:1.23-alpine
    volumes:
      - .:/home/app
      - ./docker/nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./docker/nginx/sites/:/etc/nginx/sites-available
      - ./docker/nginx/conf/mime.types:/etc/nginx/mime.types
    networks:
      web:
        aliases:
          - nginx.docker
          - api.nginx.docker
    links:
      - php

  nginx-proxy:
    image: jwilder/nginx-proxy
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./docker/nginx-proxy/cert/:/etc/nginx/certs/
      - ./docker/nginx-proxy/nginx.conf:/etc/nginx/nginx.conf
    networks:
      - web

  redis:
    image: redis:7-alpine
    networks:
      - web

  cypress:
    image: cypress/base:latest
    working_dir: /home/app/front
    volumes:
      - .:/home/app
      - cypress:/root/.cache/Cypress/
    networks:
      - web
    external_links:
      - nginx-proxy:nginx.docker
      - nginx-proxy:api.nginx.docker

  database:
    image: postgres:14.5-alpine
    user: root
    environment:
      - POSTGRES_PASSWORD= <redacted>
      - PGDATA=/data/postgres
    ports:
      - "5432:5432"
    networks:
      - web
    volumes:
      - database-data:/data/postgres

  azurite:
    image: mcr.microsoft.com/azure-storage/azurite
    command: "azurite --loose --blobHost 0.0.0.0 --blobPort 10000 --queueHost 0.0.0.0 --queuePort 10001 --location /workspace --debug /workspace/debug.log"
    ports:
      - "10010:10000"
      - "10011:10001"
      - "10012:10002"
    volumes:
      - azurite:/workspace
    networks:
      - web

  elasticsearch:
    build: docker/elasticsearch
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m" # 512mo HEAP
      - ELASTIC_PASSWORD=<redacted>
    ports:
      - 9200:9200
    networks:
      web:
        aliases:
          - elasticsearch.docker

  kibana:
    image: docker.elastic.co/kibana/kibana:8.4.3
    environment:
      ELASTICSEARCH_URL: http://elasticsearch.docker:9200
    depends_on:
      - elasticsearch
    ports:
      - 5601:5601
    volumes:
      - elasticsearch-data:/usr/share/kibana/data
    networks:
      web:
        aliases:
          - kibana.docker

  mercure:
    image: dunglas/mercure
    restart: unless-stopped
    environment:
      SERVER_NAME: ':80'
      MERCURE_PUBLISHER_JWT_KEY: <redacted>
      MERCURE_SUBSCRIBER_JWT_KEY: <redacted>
      MERCURE_EXTRA_DIRECTIVES: |
        cors_origins *
    command: /usr/bin/caddy run --config /etc/caddy/dev.Caddyfile
    volumes:
      - caddy_data:/data
      - caddy_config:/config
    networks:
      - web

  rabbitmq:
    image: rabbitmq:3-management
    networks:
      web:
        aliases:
          - rabbitmq.docker

  smtp:
    image: schickling/mailcatcher
    environment:
      VIRTUAL_PORT: 1080
    networks:
      - web

networks:
  web:
    ipam:
      config:
        - subnet: "172.252.0.0/16"

volumes:
  database-data:
  cypress:
  elasticsearch-data:
  pgadmin-data:
  caddy_data:
  caddy_config:
  azurite:

vvoland (Collaborator) commented Jul 11, 2024

@akerouanton @robmry PTAL

sydevinfra (Author) commented

@FabienPapet FYI
