
podman save not saving multiple images #2669

Closed
Prodian0013 opened this issue Mar 15, 2019 · 32 comments · Fixed by #6811
Labels: kind/bug, locked - please file new issue/PR, stale-issue

Comments

@Prodian0013:

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug

Description
When executing podman save with multiple images, only one image is saved.

Steps to reproduce the issue:

  1. podman save > images.tar docker.io/busybox:1.27.2 docker.io/metallb/controller:v0.3.1 docker.io/metallb/speaker:v0.3.1

  2. podman import images.tar

  3. podman images

Describe the results you received:
Only the first image is saved and imported.

Describe the results you expected:
All three images should be saved and imported.

The documentation states that podman save is the equivalent of docker save, whereas docker save can actually save multiple images at once.
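
For comparison, the docker workflow being referenced looks roughly like this (a sketch reusing the image names from the reproduction steps; the archive filename is illustrative):

# docker save accepts several image references and writes them all into one archive
docker save -o images.tar docker.io/busybox:1.27.2 docker.io/metallb/controller:v0.3.1 docker.io/metallb/speaker:v0.3.1
# docker load restores every image contained in the archive
docker load -i images.tar
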

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

podman version 1.1.2

Output of podman info --debug:

debug:
  compiler: gc
  git commit: ""
  go version: go1.10.2
  podman version: 1.1.2
host:
  BuildahVersion: 1.7.1
  Conmon:
    package: podman-1.1.2-2.git0ad9b6b.el7.x86_64
    path: /usr/libexec/podman/conmon
    version: 'conmon version 1.14.0-dev, commit: 6e07c13bf86885ba6d71fdbdff90f436e18abe39-dirty'
  Distribution:
    distribution: '"centos"'
    version: "7"
  MemFree: 96034816
  MemTotal: 3973865472
  OCIRuntime:
    package: runc-1.0.0-59.dev.git2abd837.el7.centos.x86_64
    path: /usr/bin/runc
    version: 'runc version spec: 1.0.0'
  SwapFree: 2147217408
  SwapTotal: 2147479552
  arch: amd64
  cpus: 2
  hostname: controller.lan
  kernel: 3.10.0-957.5.1.el7.x86_64
  os: linux
  rootless: false
  uptime: 59m 48.22s
insecure registries:
  registries: []
registries:
  registries:
  - registry.access.redhat.com
  - docker.io
  - registry.fedoraproject.org
  - quay.io
  - registry.centos.org
store:
  ConfigFile: /etc/containers/storage.conf
  ContainerStore:
    number: 0
  GraphDriverName: overlay
  GraphOptions: null
  GraphRoot: /var/lib/containers/storage
  GraphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  ImageStore:
    number: 28
  RunRoot: /var/run/containers/storage
  VolumePath: /var/lib/containers/storage/volumes

Additional environment details (AWS, VirtualBox, physical, etc.):

@openshift-ci-robot added the kind/bug label Mar 15, 2019
@rhatdan (Member) commented Mar 15, 2019:

@haircommander PTAL

@afbjorklund (Contributor):

I got bitten by this bug as well. Saved multiple entries, all pointing to the same image...

Guess I will use multiple files as a workaround, but it would be good if this could be fixed.

@rhatdan (Member) commented Apr 13, 2019:

@haircommander Any update on this?

@baude (Member) commented May 29, 2019:

@haircommander Any update on this?

@haircommander (Collaborator):

Still waiting on containers/image#610. I don't currently have time to tackle it on the c/image side.

@afbjorklund (Contributor):

My workaround was to use one tarball for each image, and a for loop to load them one-by-one.

find ... -type f | xargs -n 1 podman load -i
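
Spelled out, that workaround might look like the following sketch (image names reuse those from the report above; the ./archives directory and the filename mangling are illustrative):

# Save each image into its own docker-archive tarball
mkdir -p ./archives
for img in docker.io/busybox:1.27.2 docker.io/metallb/controller:v0.3.1 docker.io/metallb/speaker:v0.3.1; do
  podman save -o "./archives/$(echo "$img" | tr '/:' '__').tar" "$img"
done

# Load them back one at a time on the target host
find ./archives -type f -name '*.tar' | xargs -n 1 podman load -i
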

@rhatdan (Member) commented Aug 5, 2019:

@haircommander any update on this?

@haircommander (Collaborator):

Unfortunately not; I am not sure I have the bandwidth to tackle it both here and in c/image.

@rhatdan (Member) commented Aug 5, 2019:

OK @vrothberg, do you want to grab it?

@vrothberg (Member):

Let's move discussion over to containers/image#610. I added it to my backlog, but I can't commit to a schedule at the moment.

@github-actions bot commented Nov 4, 2019:

This issue had no activity for 30 days. In the absence of activity or the "do-not-close" label, the issue will be automatically closed within 7 days.

@rhatdan (Member) commented Nov 4, 2019:

This is still a good issue, not sure when we can get someone to work on it?
@mtrmac @vrothberg PTAL

@YuLimin commented Jan 16, 2020:

podman version

Version:            1.4.4
RemoteAPI Version:  1
Go Version:         go1.10.3
OS/Arch:            linux/amd64

podman info --debug

debug:
  compiler: gc
  git commit: ""
  go version: go1.10.3
  podman version: 1.4.4
host:
  BuildahVersion: 1.9.0
  Conmon:
    package: podman-1.4.4-4.el7.x86_64
    path: /usr/libexec/podman/conmon
    version: 'conmon version 0.3.0, commit: unknown'
  Distribution:
    distribution: '"rhel"'
    version: "7.7"
  MemFree: 4073623552
  MemTotal: 8350715904
  OCIRuntime:
    package: containerd.io-1.2.10-3.2.el7.x86_64
    path: /usr/bin/runc
    version: |-
      runc version 1.0.0-rc8+dev
      commit: 3e425f80a8c931f88e6d94a8c831b9d5aa481657
      spec: 1.0.1-dev
  SwapFree: 0
  SwapTotal: 0
  arch: amd64
  cpus: 12
  hostname: registry
  kernel: 3.10.0-1062.el7.x86_64
  os: linux
  rootless: false
  uptime: 56m 35.65s
registries:
  blocked: null
  insecure: null
  search:
  - registry.access.redhat.com
  - docker.io
  - registry.fedoraproject.org
  - quay.io
  - registry.centos.org
store:
  ConfigFile: /etc/containers/storage.conf
  ContainerStore:
    number: 1
  GraphDriverName: overlay
  GraphOptions: null
  GraphRoot: /var/lib/containers/storage
  GraphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  ImageStore:
    number: 2
  RunRoot: /var/run/containers/storage
  VolumePath: /var/lib/containers/storage/volumes

docker version

Client: Docker Engine - Community
 Version:           19.03.5
 API version:       1.40
 Go version:        go1.12.12
 Git commit:        633a0ea
 Built:             Wed Nov 13 07:25:41 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.5
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.12
  Git commit:       633a0ea
  Built:            Wed Nov 13 07:24:18 2019
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.10
  GitCommit:        b34a5c8af56e510852c35414db4c1f4fa6172339
 runc:
  Version:          1.0.0-rc8+dev
  GitCommit:        3e425f80a8c931f88e6d94a8c831b9d5aa481657
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

docker save with multiple images works fine.

IDS=$(docker images | awk '{if ($1 ~ /^(registry)/) print $3}')
echo $IDS
docker save $IDS -o xxx.tar

podman save with multiple images fails.

IDS=$(podman images | awk '{if ($1 ~ /^(registry)/) print $3}')
echo $IDS
podman save $IDS -o xxx.tar

@vrothberg (Member):

This is still a good issue, not sure when we can get someone to work on it?
@mtrmac @vrothberg PTAL

I fear we have other tasks with higher priority on our plates at the moment. Let's move discussion over to containers/image#610.

@rhatdan (Member) commented Feb 18, 2020:

Sadly no progress.

@marcfiu commented Mar 30, 2020:

I am trying to champion podman over docker; however, colleagues use "docker save" to air-gap images to a docker-archive tarball. This translates into "podman" not being a drop-in replacement for "docker", which is suboptimal. At the least, "podman save" should error out indicating "can't do multiple images" rather than silently succeeding with something that is simply not equivalent to "docker save".

@vrothberg (Member):

Thanks for the feedback, @marcfiu! Throwing an error is definitely better.

@vrothberg (Member):

I opened #5659 to address your feedback. Note that I didn't error out but instead log a message when more than one argument is passed, as I don't want to break others who rely on the current behavior.

@vrothberg (Member):

@baude, afaiu this issue should get some attention soon, right?

@github-actions bot commented Jul 5, 2020:

A friendly reminder that this issue had no activity for 30 days.

@rhatdan (Member) commented Jul 6, 2020:

This is being worked on.

@github-actions bot commented Aug 6, 2020:

A friendly reminder that this issue had no activity for 30 days.

@rhatdan (Member) commented Aug 6, 2020:

Most of the work is happening in containers/image right now.

@ncapps commented Feb 16, 2021:

Is this issue still being worked on? We would like the ability to load a tar file containing multiple images.

@mheon (Member) commented Feb 16, 2021:

It's been resolved since Podman 2.2.0

@afages commented May 5, 2021:

I'm sorry, but this is not resolved (or a regression occurred in 2.2.1).
Running RHEL8 with Podman 2.2.1, whether I use "docker save -o", "podman save -o", or "docker/podman >", only the first image is included in the .tar file, whether I use 2 images or more.

@rhatdan (Member) commented May 5, 2021:

Could you check it against podman 3.*, which should be in RHEL 8.4, about to be released?

@vrothberg (Member):

It works when specifying the -m flag. We had to remain backwards compatible with older Podman versions, so users have to opt in to saving multi-image archives.
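
For anyone landing here, a short sketch of the opt-in usage on Podman 2.2 and later (image names reuse those from the original report; the archive filename is illustrative):

# -m / --multi-image-archive writes every listed image into a single docker-archive
podman save -m -o images.tar docker.io/busybox:1.27.2 docker.io/metallb/controller:v0.3.1
# podman load restores all images contained in the multi-image archive
podman load -i images.tar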

@rhatdan (Member) commented May 6, 2021:

Should we add a containers.conf value for this?

@vrothberg (Member):

Should we add a containers.conf value for this?

Very good thought! We actually implemented that already. The default can be changed via multi_image_archive=true in /etc/containers/containers.conf.
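
A sketch of that containers.conf override; the key is the one named above, and placing it under the [engine] table reflects my assumption about the file layout:

# /etc/containers/containers.conf
[engine]
# Assumed placement: make podman save produce multi-image archives by default
multi_image_archive = true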

rhatdan added a commit to rhatdan/podman that referenced this issue May 6, 2021
We probably should put a whole bunch of other documentation in man
pages about containers.conf, but let's settle on this description
before we go add other docs.

Helps with: containers#2669

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
@podmod001 commented May 21, 2022:

It works when specifying the -m flag. We had to remain backwards compatible with older Podman versions, so users have to opt in to saving multi-image archives.

@vrothberg
I strongly suggest displaying some kind of warning or error with docker-archives when specifying multiple images without using the -m option.

It is very confusing that the command seems to run successfully without -m but triggers an endless loop of runtime errors when actually trying to start containers.

I needed to invest an hour to realize that there is an extra option for multiple images. See https://unix.stackexchange.com/questions/703355/how-to-do-offline-installation-of-multiple-images-via-podman-load-save for a reproduction.

Thanks for considering!

@vrothberg (Member):

@podmod001, thanks for reaching out.

It is very confusing that the command seems to run successfully without -m but triggers an endless loop of runtime errors when actually trying to start containers.

That sounds like a bug and is definitely not part of the design. Please open a new issue, ideally with a reproducer, if you want to see it fixed.

@github-actions bot added the locked - please file new issue/PR label Sep 20, 2023
@github-actions bot locked as resolved and limited conversation to collaborators Sep 20, 2023