"sending tarball" takes a long time even when the image already exists #107

Open
shykes opened this issue Jul 19, 2019 · 15 comments
Open

"sending tarball" takes a long time even when the image already exists #107

shykes opened this issue Jul 19, 2019 · 15 comments
Labels
kind/enhancement New feature or request

Comments

@shykes
Contributor

shykes commented Jul 19, 2019

When I build an image that already exists (because of a previous build on the same engine with a 100% cache hit), the builder still spends a lot of time in "sending tarball". This causes a noticeable delay in the build. Perhaps this delay could be optimized away in the case of a 100% cache hit?

For example, when building a 1.84GB image with 51 layers, the entire build takes 9s, of which 8s is spent in "sending tarball" (see output below).

It would be awesome if fully cached builds returned at near-interactive speed!

 => [internal] load build definition from Dockerfile          0.0s
 => => transferring dockerfile: 2.53kB                        0.0s
 => [internal] load .dockerignore                             0.0s
 => => transferring context: 2B                               0.0s
 => [internal] load metadata for docker.io/library/alpine:la  1.0s
 => [1/51] FROM docker.io/library/alpine@sha256:6a92cd1fcdc8  0.0s
 => => resolve docker.io/library/alpine@sha256:6a92cd1fcdc8d  0.0s
 => CACHED [2/51] RUN apk update                              0.0s
 => CACHED [3/51] RUN apk add openssh                         0.0s
 => CACHED [4/51] RUN apk add bash                            0.0s
 => CACHED [5/51] RUN apk add bind-tools                      0.0s
 => CACHED [6/51] RUN apk add curl                            0.0s
 => CACHED [7/51] RUN apk add docker                          0.0s
 => CACHED [8/51] RUN apk add g++                             0.0s
 => CACHED [9/51] RUN apk add gcc                             0.0s
 => CACHED [10/51] RUN apk add git                            0.0s
 => CACHED [11/51] RUN apk add git-perl                       0.0s
 => CACHED [12/51] RUN apk add make                           0.0s
 => CACHED [13/51] RUN apk add python                         0.0s
 => CACHED [14/51] RUN apk add openssl-dev                    0.0s
 => CACHED [15/51] RUN apk add vim                            0.0s
 => CACHED [16/51] RUN apk add py-pip                         0.0s
 => CACHED [17/51] RUN apk add file                           0.0s
 => CACHED [18/51] RUN apk add groff                          0.0s
 => CACHED [19/51] RUN apk add jq                             0.0s
 => CACHED [20/51] RUN apk add man                            0.0s
 => CACHED [21/51] RUN cd /tmp && git clone https://github.c  0.0s
 => CACHED [22/51] RUN apk add go                             0.0s
 => CACHED [23/51] RUN apk add coreutils                      0.0s
 => CACHED [24/51] RUN apk add python2-dev                    0.0s
 => CACHED [25/51] RUN apk add python3-dev                    0.0s
 => CACHED [26/51] RUN apk add tar                            0.0s
 => CACHED [27/51] RUN apk add vim                            0.0s
 => CACHED [28/51] RUN apk add rsync                          0.0s
 => CACHED [29/51] RUN apk add less                           0.0s
 => CACHED [30/51] RUN pip install awscli                     0.0s
 => CACHED [31/51] RUN curl --silent --location "https://git  0.0s
 => CACHED [32/51] RUN curl https://dl.google.com/dl/cloudsd  0.0s
 => CACHED [33/51] RUN curl -L -o /usr/local/bin/kubectl htt  0.0s
 => CACHED [34/51] RUN curl -L -o /usr/local/bin/kustomize    0.0s
 => CACHED [35/51] RUN apk add ruby                           0.0s
 => CACHED [36/51] RUN apk add ruby-dev                       0.0s
 => CACHED [37/51] RUN gem install bigdecimal --no-ri --no-r  0.0s
 => CACHED [38/51] RUN gem install kubernetes-deploy --no-ri  0.0s
 => CACHED [39/51] RUN apk add npm                            0.0s
 => CACHED [40/51] RUN npm config set unsafe-perm true        0.0s
 => CACHED [41/51] RUN npm install -g yarn                    0.0s
 => CACHED [42/51] RUN npm install -g netlify-cli             0.0s
 => CACHED [43/51] RUN apk add libffi-dev                     0.0s
 => CACHED [44/51] RUN pip install docker-compose             0.0s
 => CACHED [45/51] RUN apk add mysql-client                   0.0s
 => CACHED [46/51] RUN (cd /tmp && curl -L -O https://releas  0.0s
 => CACHED [47/51] RUN apk add shadow sudo                    0.0s
 => CACHED [48/51] RUN echo '%wheel ALL=(ALL) NOPASSWD: ALL'  0.0s
 => CACHED [49/51] RUN useradd -G docker,wheel -m -s /bin/ba  0.0s
 => CACHED [50/51] RUN groupmod -o -g 999 docker              0.0s
 => CACHED [51/51] WORKDIR /home/sh                           0.0s
 => exporting to oci image format                             8.8s
 => => exporting layers                                       0.3s
 => => exporting manifest sha256:69088589c4e63094e51ae0e34e6  0.0s
 => => exporting config sha256:65db1e1d42a26452307b43bc5c683  0.0s
 => => sending tarball                                        8.3s
 => importing to docker                                       0.1s
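
For reference, a minimal sequence that reproduces this pattern (the image and builder names here are illustrative; it assumes a builder on the docker-container driver, which is where the tarball transfer happens):

docker buildx create --name container-builder --driver docker-container --use
docker buildx build --load -t big-image:test .   # first build populates the cache
docker buildx build --load -t big-image:test .   # fully cached, yet most of the time still goes to "sending tarball"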
@tonistiigi tonistiigi added the kind/enhancement New feature or request label Jul 23, 2019
@tonistiigi
Member

There should theoretically be a way to do this even for partial matches. One problem, though, is that there is no guarantee the image is still in Docker when the build finishes. If it is deleted before the build completes, you could get an error (on tag, or on uploading a layer for partial matches). So this probably needs an opt-in flag (at least until there is a special incremental load endpoint in the Docker API).

@shykes
Contributor Author

shykes commented Jul 24, 2019

Yes, I agree, an opt-in flag would be best. Thanks.

@vlad-ivanov-name

I would like to add a data point and a reproducible example for this problem.

Dockerfile
ARG TAG
FROM tensorflow/tensorflow:${TAG}

Creating a builder with the docker-container driver:

docker buildx create --name buildx-default --driver docker-container --bootstrap

Building a bunch of big images with buildx-default:

time bash -c 'for TAG in 2.8.0-gpu 2.7.1-gpu 2.7.0-gpu 2.6.0-gpu 2.4.3-gpu ; do docker buildx build --builder buildx-default --tag tf-test:${TAG} --build-arg TAG=${TAG} --load . ; done'
...
bash -c   132.93s user 29.20s system 190% cpu 1:25.31 total

Same but with default builder:

time bash -c 'for TAG in 2.8.0-gpu 2.7.1-gpu 2.7.0-gpu 2.6.0-gpu 2.4.3-gpu ; do docker buildx build --tag tf-test:${TAG} --build-arg TAG=${TAG} --load . ; done'
...
bash -c   0.34s user 0.20s system 7% cpu 7.535 total

That is a rather dramatic slow-down, especially when building many similar images. As far as I understand, all this time is spent serializing and deserializing data. Even if it's a hacky, half-baked solution like an opt-in flag, it would certainly be nice to have a way to optimize this.

@agirault

agirault commented Oct 20, 2022

Our use case also suffers from the time it takes to export to the OCI image format and send the tarball.

We end up sticking with DOCKER_BUILDKIT:

DOCKER_BUILDKIT=1 docker build \
    --build-arg BUILDKIT_INLINE_CACHE=1 \
    -t test .
[+] Building 1.3s (28/28) FINISHED                                  
 => [internal] load build definition from Dockerfile           0.0s
 => => transferring dockerfile: 38B                            0.0s
 => [internal] load .dockerignore                              0.0s
 => => transferring context: 35B                               0.0s
 => resolve image config for docker.io/docker/dockerfile:1     0.8s
[...]
 => exporting to image                                         0.0s
 => => exporting layers                                        0.0s
 => => writing image sha256:d40374998f58b1491b57be3336fdc2793  0.0s
 => => naming to docker.io/library/test                        0.0s
 => exporting cache                                            0.0s
 => => preparing build cache for export                        0.0s

Instead of using buildx:

docker buildx build \
   --cache-to type=inline \
   --builder builder \
   --load \
   -t test .
[+] Building 33.5s (30/30) FINISHED                                 
=> [internal] load .dockerignore                              0.0s
=> => transferring context: 1.98kB                            0.0s
=> [internal] load build definition from Dockerfile           0.0s
=> => transferring dockerfile: 6.43kB                         0.0s
=> resolve image config for docker.io/docker/dockerfile:1     1.6s
[...]
=> preparing layers for inline cache                          0.1s
=> exporting to oci image format                             26.9s
=> => exporting layers                                        0.0s
=> => exporting manifest sha256:9b04583c6b0681e05222c3b61e59  0.0s
=> => exporting config sha256:e58cf81d847d829a7a94d6cfa57b29  0.0s
=> => sending tarball                                        26.9s
=> importing to docker                                        0.4s                    
[...]

@reuben

reuben commented Nov 7, 2022

In my docker-compose project I'm getting "sending tarball" times of almost 2 minutes, even when the entire build is cached. It makes the development experience so painful that I'm considering setting up the services outside of Docker to avoid this.

 => [docker-worker internal] load build definition from Dockerfile                              0.0s
 => => transferring dockerfile: 190B                                                            0.0s
 => [docker-worker internal] load .dockerignore                                                 0.0s
 => => transferring context: 2B                                                                 0.0s
 => [docker-worker internal] load metadata for docker.io/library/python:3.8                     0.9s
 => [docker-backend internal] load build definition from Dockerfile                             0.0s
 => => transferring dockerfile: 6.25kB                                                          0.0s
 => [docker-backend internal] load .dockerignore                                                0.0s
 => => transferring context: 2B                                                                 0.0s
 => [docker-backend internal] load metadata for docker.io/library/ubuntu:20.04                  0.8s
 => [docker-backend internal] load metadata for docker.io/library/node:14.19-bullseye-slim      0.8s
 => [docker-worker 1/5] FROM docker.io/library/python:3.8@sha256:blah                           0.0s
 => => resolve docker.io/library/python:3.8@sha256:blah                                         0.0s
 => [docker-worker internal] load build context                                                 0.1s
 => => transferring context: 43.51kB                                                            0.0s
 => [docker-backend gatsby  1/11] FROM docker.io/library/node:14.19-bullseye-slim@sha256:blah   0.0s
 => => resolve docker.io/library/node:14.19-bullseye-slim@sha256:blah                           0.0s
 => [docker-backend internal] load build context                                                0.1s
 => => transferring context: 72.58kB                                                            0.1s
 => [docker-backend with-secrets 1/6] FROM docker.io/library/ubuntu:20.04@sha256:blah           0.0s
 => => resolve docker.io/library/ubuntu:20.04@sha256:blah                                       0.0s
 => CACHED [docker-worker 2/5]                                                                  0.0s
 => CACHED [docker-worker 3/5]                                                                  0.0s
 => CACHED [docker-worker 4/5]                                                                  0.0s
 => CACHED [docker-worker 5/5]                                                                  0.0s
 => [docker-worker] exporting to oci image format                                             117.7s
 => => exporting layers                                                                         0.3s
 => => exporting manifest sha256:blah                                                           0.0s
 => => exporting config sha256:blah                                                             0.0s
 => => sending tarball                                                                        117.6s

@sillen102

In my docker-compose project I'm getting "sending tarball" times of almost 2 minutes, even when the entire build is cached. [...]

Same here. It's incredibly painful building even not-so-large projects that should take seconds.

@spmason

spmason commented Nov 8, 2022

In case it's relevant to anyone: if you're using docker-for-mac, there's an issue about slow performance when saving/loading tarballs that might be affecting you (AFAIK buildx --load just uses docker load under the hood): docker/for-mac#6346 (comment)

As you can see, there's hopefully a fix for it in the next release. In the meantime, a workaround is to disable the virtualization.framework experimental feature.

@tonistiigi
Member

"Sending tarball" means you are running the build inside a container(or k8s or remote instance). While these are powerful modes (eg. for multi-platform) if you want to run the image you just built with local Docker, it needs to be transferred to Docker first. If your workflow is to build and then run in Docker all the time, then you should build with a Docker driver on buildx, because that driver does not have the "sending tarball" phase to make the result available as local Docker image.

You can read more about the drivers at https://github.com/docker/buildx/blob/master/docs/manuals/drivers/index.md

The latest proposal for speeding up the loading phase for the other drivers is moby/moby#44369.
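
For reference, a quick way to check which driver your current builder uses, and to build against the docker driver explicitly (the builder named "default" normally uses the docker driver; the image name is illustrative):

docker buildx ls                                      # the DRIVER/ENDPOINT column shows each builder's driver
docker buildx build --builder default -t my-image .   # the docker driver has no "sending tarball" phase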

@reuben

reuben commented Nov 10, 2022

@tonistiigi a-ha! That was it. At some point I had run docker buildx create --use, and that replaced the default builder (the one on the Docker driver) as my active builder. Running docker buildx ls to find the builder with the Docker driver and then docker buildx use --default builder_name fixed it for me! No more "sending tarball" step.
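
For anyone hitting the same thing, the fix above amounts to something like this (the builder name is illustrative; pick whichever entry in the list uses the docker driver):

docker buildx ls                       # find the builder whose driver is "docker"
docker buildx use --default default    # make it the active and default builder again
docker buildx build -t my-image .      # no more "sending tarball" step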

@Felixoid

Felixoid commented Aug 3, 2023

Hello. We use docker buildx build --push --output=type=image,push-by-digest=true, and it seems to hit the same issue as mentioned here:

Thu, 03 Aug 2023 17:42:11 GMT #15 exporting to image
Thu, 03 Aug 2023 17:42:11 GMT #15 exporting layers
Thu, 03 Aug 2023 17:46:53 GMT #15 exporting layers 281.9s done
Thu, 03 Aug 2023 17:46:53 GMT #15 exporting manifest sha256:a46bfbdf8f2e24cbc0812f178cdf81704222f9924a52c9c90feeb971afc5f2ca 0.0s done
Thu, 03 Aug 2023 17:46:53 GMT #15 exporting config sha256:143c2a936b7a2cd51e108d683d6d6c7d4f7160e48543ca2977637cbb0829b848 done
Thu, 03 Aug 2023 17:46:53 GMT #15 exporting attestation manifest sha256:d336dfa04618341c715c5a10ac07eeda416b633cf15b83a39c28fba0d0662a43 0.0s done
Thu, 03 Aug 2023 17:46:53 GMT #15 exporting manifest list sha256:209725be101f2fe247081474b1057355dfbc1010de2581643d0a6c43e8dfda75
Thu, 03 Aug 2023 17:46:53 GMT #15 exporting manifest list sha256:209725be101f2fe247081474b1057355dfbc1010de2581643d0a6c43e8dfda75 0.0s done

But as far as I can see, #1813 should address this for --output=docker. Does that mean the same could be done to speed things up in our case too?

@sanmai-NL

@tonistiigi But that would mean the user forgoes the advantages of the other build drivers. The issue is with the performance of sending tarballs.

@jedevc
Collaborator

jedevc commented Dec 19, 2023

As mentioned in #626, moby/moby#44369 is the docker engine-side requirement for this feature.

@HWiese1980

HWiese1980 commented Aug 8, 2024

[screenshot]

Something's off here...

/update: a restart of Docker Desktop solved the problem.

@steromano87

steromano87 commented Aug 21, 2024

Hello, I have encountered the same issue, but only when building images from Windows. Using the same command inside WSL works fine.

My setup is the following:

  • Podman Desktop 1.11.1 installed on Windows host
  • Docker CLI (version 27.1.1) installed on Windows as part of the Podman Desktop bundle (with buildx plugin)
  • docker-ce-cli package (version 27.1.2) installed on WSL (Ubuntu 24.04), with buildx plugin
  • I'm using a custom builder (with the docker-container driver) because my company's Docker registry uses a custom CA certificate. No additional configuration has been added.

Both Windows and WSL Docker CLIs are using the same endpoint to connect to the singleton Podman server instance (in fact, I can see the same image and container set on both sides).

When I launch the following command inside WSL, it works fine:
docker build --platform linux/amd64 --load -t my-image:latest .

However, when launching the same command from Windows PowerShell, it gets stuck indefinitely on the "sending tarball" step.

/update: it seems to be an issue related to PowerShell only. Running the same command again inside the old Windows CMD works fine as well.

@HWiese1980

I was experiencing the issue when building from macOS...
