"sending tarball" takes a long time even when the image already exists #107
There should theoretically be a way to do it even for partial matches. One problem, though, is that there is no guarantee the image is still in Docker when the build finishes. If it is deleted before the build completes, you could get an error (on tag, or on uploading a layer for partial matches). So this probably needs to be opt-in with a flag, at least until there is a special incremental-load endpoint in the Docker API.
Yes, I agree, an opt-in flag would be best. Thanks.
I would like to add a data point and a reproducible example for this problem.
Our use case also suffers from the time spent in the "exporting to OCI image format" / "sending tarball" phase. We end up sticking to:

```
DOCKER_BUILDKIT=1 docker build \
  --build-arg BUILDKIT_INLINE_CACHE=1 \
  -t test .
```

instead of using:

```
docker buildx build \
  --cache-to type=inline \
  --builder builder \
  --load \
  -t test .
```
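For anyone trying to reproduce the comparison, a minimal timing harness might look like the sketch below. The image name `test` and the builder name `builder` are just the ones from the commands above; run each build twice so the timed run is fully cached:

```
# Warm the cache, then time the fully cached rebuild with the classic builder.
DOCKER_BUILDKIT=1 docker build --build-arg BUILDKIT_INLINE_CACHE=1 -t test .
time DOCKER_BUILDKIT=1 docker build --build-arg BUILDKIT_INLINE_CACHE=1 -t test .

# Same exercise with buildx and a container builder; the second run still
# pays for "sending tarball" even though every layer is cached.
docker buildx build --cache-to type=inline --builder builder --load -t test .
time docker buildx build --cache-to type=inline --builder builder --load -t test .
```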
In my docker-compose project I'm getting "sending tarball" times of almost 2 minutes, even when the entire build is cached. It makes the development experience so painful that I'm considering setting up the services outside of Docker to avoid this.
Same here. Builds are incredibly painful even for not-so-large projects that should take seconds.
In case it's relevant to anyone: if you're using Docker for Mac, there's an issue about slow performance when saving/loading tarballs that might be affecting you (AFAIK buildx is affected too). There's hopefully a fix for it in the next release; in the meantime, a workaround is to disable the setting mentioned in that issue.
"Sending tarball" means you are running the build inside a container(or k8s or remote instance). While these are powerful modes (eg. for multi-platform) if you want to run the image you just built with local Docker, it needs to be transferred to Docker first. If your workflow is to build and then run in Docker all the time, then you should build with a Docker driver on buildx, because that driver does not have the "sending tarball" phase to make the result available as local Docker image. You can read more about the drivers at https://github.com/docker/buildx/blob/master/docs/manuals/drivers/index.md Latest proposal for speeding up the loading phase for other drivers moby/moby#44369 |
@tonistiigi a-ha, that was it! At some point I had switched to a different builder and forgotten about it.
Hello. We hit the same issue. But as far as I can see, #1813 should address it for the driver we use.
@tonistiigi But that would mean the user foregoes the advantages of the other build drivers. The issue is with the performance of sending tarballs.
As mentioned in #626, moby/moby#44369 is the Docker-engine-side requirement for this feature.
Hello, I have encountered the same issue, but only when building images from Windows. Using the same command inside WSL works fine. My setup is the following: both the Windows and WSL Docker CLIs use the same endpoint to connect to a single Podman server instance (in fact, I can see the same set of images and containers on both sides). When I launch the build command inside WSL, it works fine. However, launching the same command from Windows PowerShell, it gets stuck indefinitely on the "sending tarball" step.

Update: it seems to be an issue with PowerShell only. Running the same command again from the old Windows CMD works fine as well.
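One way to double-check that the two CLIs really talk to the same endpoint (run in both PowerShell and WSL; output format is illustrative):

```
# Show all contexts and which one is active.
docker context ls

# Print the endpoint of the active context.
docker context inspect --format '{{.Endpoints.docker.Host}}'
```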
I was experiencing the issue when building from macOS...
When I build an image that already exists (because of a previous build on the same engine with a 100% cache hit), the builder still spends a lot of time in "sending tarball". This causes a noticeable delay in the build. Perhaps this delay could be optimized away in the case of a 100% cache hit?
For example, when building a 1.84GB image with 51 layers, the entire build takes 9s, of which 8s is spent in "sending tarball" (see output below).
It would be awesome if fully cached builds returned at near-interactive speed!
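A minimal way to reproduce this, assuming a container-driver builder (the builder and image names below are illustrative):

```
# Create a container-based builder; these are the builders that need
# the "sending tarball" step to load results into local Docker.
docker buildx create --name repro --driver docker-container --use

# First build populates the cache.
docker buildx build --load -t demo .

# Second build is a 100% cache hit, yet still spends most of its time
# in "sending tarball" while the (unchanged) image is transferred.
time docker buildx build --load -t demo .
```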