
CI: We should be able to complete a build in 3 hours even without Docker image cache #49278

Open
kennytm opened this issue Mar 22, 2018 · 20 comments
Labels
C-enhancement Category: An issue proposing an enhancement or a PR with one. T-bootstrap Relevant to the bootstrap subteam: Rust's build system (x.py and src/bootstrap) T-infra Relevant to the infrastructure team, which will review and decide on the PR/issue.

Comments

@kennytm
Member

kennytm commented Mar 22, 2018

Our CI builders (except macOS and Windows) use Docker, and we cache the Docker images on Travis. Thanks to the cache, the docker build command normally takes only a few seconds to complete. However, when the cache is invalidated for whatever reason, the Docker image needs to be rebuilt from scratch, which can take a very long time.

Recently this happened with #49246 — the Docker image cache of dist-x86_64-linux alt became stale, so the image had to be built from scratch. One of the steps involves compiling GCC, so the whole docker build command takes over 40 minutes. Worse, the alt builders have assertions enabled, so all stage1+ rustc invocations are slower than their normal counterparts. Together, this makes it impossible to complete the build within 3 hours. Travis will not update the cache unless the build is successful, so I had to exclude RLS, Rustfmt and Clippy from the distribution to ensure the job passes.

I don't think we should rely entirely on Travis's cache for speed. Ideally, the docker build command should spend at most 10 minutes, assuming a decent network speed (~2 MB/s on Travis) and reasonable CPU performance (~2.3 GHz × 4 CPUs on Travis).

In the dist-x86_64-linux alt case, if we hosted a precompiled GCC 4.8.5 for CentOS 5, we could have trimmed 32 minutes off the Docker build time, which would have allowed us to complete the build without removing anything.
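
For illustration, a step like this in the image setup could fetch a prebuilt toolchain instead of compiling GCC. This is only a sketch of the idea, not the actual Dockerfile: the mirror URL, tarball name, and install prefix are all made up.

```sh
# Hypothetical replacement for the "compile GCC from source" step:
# download a prebuilt toolchain tarball from infrastructure we control.
# The URL, file name, and install prefix are placeholders.
set -euo pipefail
curl -fL https://ci-mirrors.example.invalid/gcc-4.8.5-centos5.tar.xz \
  | tar -xJf - -C /opt
export PATH=/opt/gcc-4.8.5/bin:$PATH
gcc --version   # sanity check that the prebuilt compiler actually runs
```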

@kennytm kennytm added C-enhancement Category: An issue proposing an enhancement or a PR with one. T-bootstrap Relevant to the bootstrap subteam: Rust's build system (x.py and src/bootstrap) T-infra Relevant to the infrastructure team, which will review and decide on the PR/issue. labels Mar 22, 2018
@alexcrichton
Member

I think the worst offenders here are definitely the dist images, where we're commonly building entire gcc toolchains for compatibility with older systems and whatnot. One of the major pros of the current setup, I think, is that we can test out toolchain changes before they land. That is, if you update the toolchain in the CentOS image (for whatever reason), the image is automatically rebuilt and the PR is rejected if that causes a failure. A downside of precompiled binaries is that we typically don't know whether they work, so we'd have to run the precompiled build multiple times.

That being said, I think there's a lot we can do to improve this, of course! I'd love to have incremental caching of docker images like we have with sccache. With sccache, if you send a PR to bors and it fails to build on one platform, all the other platforms are still cached. With Travis's cache we only cache if the build entirely succeeds, which is tight for things like docker image updates.

Overall I think it's basically impossible for us to get uncached builds under 3 hours, mainly because of the dist builders (which require building gcc toolchains) and three (!) versions of LLVM. In that sense, the main takeaway I see here is that we should improve the caching strategy for docker layers on Travis, which I've always wanted to do!

@alexcrichton
Member

Another idea is that this may be a perfect application for Travis's "stages". If we figure out an easy way to share the docker layers between stages (i.e. fast network transfers), we could split the docker build stage from the actual rustc build stage. I think that would effectively extend our timeout, and we'd always comfortably fit within the final time limit.

@alexcrichton
Member

Er, scratch the idea of Travis stages, they wouldn't work here. It looks like Travis stages only support parallelism inside one stage, and otherwise stages are sequential. That's not quite what we want here...

alexcrichton added a commit to alexcrichton/rust that referenced this issue Mar 22, 2018
This commit moves away from caching on Travis to our own caching on S3 for
caching docker layers between builds. Unfortunately the Travis caches have over
time had a few critical pain points:

* Caches are only updated for successful builds, meaning that if a build times
  out or fails somewhere else, the successfully created docker images aren't
  always cached. While this makes sense as a general rule for caches, it
  hurts our use case.

* Caches are per-branch and per-builder, which means that we don't have a
  separate cache for each release channel. All our merges go through the `auto`
  branch, which means they all share the same cache, even those merging to
  master/beta. As a result, PRs which switch between master/beta will keep
  rebuilding and hitting cache misses.

* Caches have historically been invalidated somewhat regularly, a little more
  aggressively than we'd want (I think).

* We don't always need to update the contents of the cache if the Docker image
  didn't change at all, and saving off the docker layers can sometimes be quite
  expensive.

For all these reasons this commit drops the usage of Travis's built-in caching
support. Instead our own caching is used by storing blobs to S3. Normally this
would be a very risky endeavour but we're basically priming a cache for a cache
(docker) so if we get this wrong the failure mode is longer builds, not stale
caches. We'll notice that pretty quickly and hopefully fix it!

The logic here is inserted directly into the `src/ci/docker/run.sh` script to
download an image based on a shasum of the `Dockerfile` and other assorted files.
This blob, if found, is loaded into docker and we record what layers were
inserted. After docker finishes the build (hopefully quickly with lots of cache
hits) we then see the sha of the final image. If it's one of the layers we
loaded then there's no need to update the cache. Otherwise we upload our layers
to the global cache, possibly overwriting what we previously just downloaded.

This is hopefully a step towards mitigating rust-lang#49278, although it doesn't
completely fix it: we'll still probably have to retry builds that bust the
cache.
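
A condensed, illustrative sketch of the flow described in that commit message. The bucket name, key layout, and image tags below are placeholders; the real logic lives in `src/ci/docker/run.sh`.

```sh
# Illustrative sketch of the S3-backed docker layer cache described above.
# Bucket name, key layout, and tags are made up.
set -euo pipefail
image=dist-x86_64-linux
cksum=$(find "src/ci/docker/$image" -type f | sort | xargs cat | sha512sum | cut -d' ' -f1)
url="https://rust-ci-caches.example.invalid/docker/$cksum"

# Seed the local docker daemon with a previously uploaded blob, if any.
loaded=""
if curl -fsSL "$url" -o /tmp/layer-cache.tar.gz; then
  loaded=$(docker load -i /tmp/layer-cache.tar.gz)
fi

# Build as usual; if the blob matched, most layers are cache hits.
docker build --rm -t "rust-ci:$image" "src/ci/docker/$image"

# Re-upload only when the final image isn't among the layers we just loaded.
final=$(docker inspect --format '{{.Id}}' "rust-ci:$image")
if ! printf '%s\n' "$loaded" | grep -q "$final"; then
  docker save "rust-ci:$image" | gzip | aws s3 cp - "s3://rust-ci-caches/docker/$cksum"
fi
```
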
bors added a commit that referenced this issue Mar 22, 2018
ci: Don't use Travis caches for docker images

alexcrichton added a commit to alexcrichton/rust that referenced this issue Mar 23, 2018
bors added a commit that referenced this issue Mar 23, 2018
ci: Don't use Travis caches for docker images

alexcrichton added a commit to alexcrichton/rust that referenced this issue Mar 23, 2018
…ennytm

ci: Don't use Travis caches for docker images

bors added a commit that referenced this issue Mar 25, 2018
ci: Don't use Travis caches for docker images

@steffengy
Contributor

@alexcrichton

Maybe, similarly to #49284, it would be better to upload all intermediate images (all layers) to S3?
Then a build could resume from the last failed step after a failure, and a successful build would also be cached entirely.
There shouldn't be any other images in the builder's cache, since it's only used to run that Dockerfile, right? If that weren't the case, one could still collect all image IDs, diff them against the ones created by docker build, and upload them with their dependencies.

@alexcrichton
Member

@steffengy seems plausible to me! Right now it's only done on success but I think we could do it on failure as well

@steffengy
Contributor

steffengy commented Apr 4, 2018

@alexcrichton
The only trouble with that is that the cache hash is currently generated by hashing all files
in the respective $IMAGE folder (e.g. dist-x86_64-linux).
So fixing a failure would automatically invalidate that hash, since you'd likely have to edit files in the $IMAGE folder.

Was it an intentional design decision to tie caching to such a hash instead of to the $IMAGE itself?

  • Tying the cache to $IMAGE would keep S3 leaner, since it would only ever contain the newest version of the required images; they effectively garbage-collect themselves because they are simply overwritten on the next run.
  • On success, we'd only export the referenced images, to ensure we don't include "stale" layers that were introduced by the cache download but are no longer used due to file changes.
  • On failure, we can't do that, but I think it's fine to keep them around temporarily (until the next success, which should only be a few commits away).
  • Currently this entire part, minus "continuing where we left off", is handled by simply deferring to someone/something to eventually clean up S3 manually.
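
To make the trade-off concrete, here is a small sketch contrasting the two cache keys being discussed; the paths and bucket layout are illustrative, not the actual run.sh behaviour.

```sh
# Content-addressed key, roughly what run.sh does today: any edit to the
# image folder changes the key, so a follow-up fix starts from a cold cache.
content_key=$(find "src/ci/docker/$IMAGE" -type f | sort | xargs cat | sha512sum | cut -d' ' -f1)

# Name-addressed key, i.e. tying the cache to $IMAGE itself: the key stays
# stable across edits, and each upload overwrites the previous blob.
name_key="$IMAGE"

echo "content-addressed: s3://rust-ci-caches/docker/$content_key"
echo "name-addressed:    s3://rust-ci-caches/docker/$name_key"
```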

@alexcrichton
Member

@steffengy tying it purely to $IMAGE was effectively what Travis did (not intentionally, just how it worked out), but that also has a number of downsides:

  • The same cache is shared for the master/beta/stable branches, meaning that when different branches go through bors the cache may be thrashed
  • The same cache is used even when the images change, so if a PR is sent to bors and bounces, it could invalidate the cache for the next build

I'm not sure which strategy is better; neither is working that great, I think :(

@steffengy
Contributor

@alexcrichton Yeah, the first one is definitely an issue (the second one is essentially the same).
I don't see how intelligent caching would work without somehow hooking into docker, since you'd have to know which hash to expect at which stage to download all needed cached layers, which probably only docker itself can do (without a lot of trickery).

Has using a docker registry instead of S3 for caching been discussed before?
I'm thinking of something like this (assuming it works the way I imagine):

docker build --cache-from $(DOCKER_REGISTRY)/rust-cache

Then after each build one would push to e.g. $(DOCKER_REGISTRY)/rust-cache:$(TRAVIS_COMMIT) and the docker build should be able to find all cached layers by going through the registry.

Disclaimer: This is untested and only relies on assumptions and discussions I found regarding this topic.
Also, I have no idea how well a "normal" docker registry would scale with this many tags and this much searching.
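
An untested sketch of that idea (DOCKER_REGISTRY and the image path are placeholders; TRAVIS_COMMIT is a standard Travis environment variable):

```sh
# Untested sketch of the registry-based idea above; DOCKER_REGISTRY is a
# placeholder. As noted further down, --cache-from turns out to need the
# referenced image(s) pulled locally first.
docker pull "$DOCKER_REGISTRY/rust-cache:latest" || true
docker build --cache-from "$DOCKER_REGISTRY/rust-cache:latest" \
  -t "$DOCKER_REGISTRY/rust-cache:$TRAVIS_COMMIT" "src/ci/docker/$image"
docker push "$DOCKER_REGISTRY/rust-cache:$TRAVIS_COMMIT"
```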

@alexcrichton
Member

@steffengy oh I'd totally be down for using a docker registry as I think it'd for sure solve most problems, I just have no idea how to run one or how we might set that up!

Is this something that'd be relatively lightweight to do?

@steffengy
Contributor

steffengy commented Apr 4, 2018

Setting up a registry is the easier part:

  • Running it somewhere where docker is available:
    docker run -d -p 5000:5000 --restart=always --name registry registry:2, plus configuration (TLS certificates and maybe a bit more)
  • Or Amazon Elastic Container Registry (might be too pricey)
  • Or use Docker Hub (free?) / gcr.io / ...

Adjustments to the approach above...

Some local tests showed that --cache-from seems to require a reference to actual local image(s) and cannot just find everything by itself.

Resulting Possibilities

  • Option 1
    • After each build, tag the resulting layers as $REGISTRY_URL/$image:$branch
    • Pass $image:$branch and $image:master to --cache-from (or maybe a more suitable baseline branch than master); see the sketch after this list.
      This requires both tags to be docker pulled beforehand, which seems okay since they should at least have the base image or more in common.
  • Option 2
    • After each build, tag the resulting layers as $REGISTRY_URL/$image:$commit
    • Before each build, fetch the image tagged with the previous commit's hash.
      I'm not sure this is feasible, since I doubt we can assume sequential builds (i.e. builds that
      only start after the previous one has finished).
      We could walk backwards through the commits until we find one that's cached (with a limit), though.
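
A rough sketch of Option 1 as described above; $REGISTRY_URL, $image, and $branch are placeholders the CI scripts would have to provide.

```sh
# Rough sketch of Option 1: pull both the branch and baseline tags so
# --cache-from can reuse their layers, then publish under the branch tag.
docker pull "$REGISTRY_URL/$image:$branch" || true
docker pull "$REGISTRY_URL/$image:master" || true

docker build \
  --cache-from "$REGISTRY_URL/$image:$branch" \
  --cache-from "$REGISTRY_URL/$image:master" \
  -t "$REGISTRY_URL/$image:$branch" \
  "src/ci/docker/$image"

docker push "$REGISTRY_URL/$image:$branch"
```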

@alexcrichton
To me Option 1 sounds promising, however there are things I don't like so far:

    1. A slow build on the beta branch might still result in one slow build cycle for that branch, e.g. if image changes are backported from stable to beta while master is entirely different.
      (This seems not very likely.)
    2. It leads to "tag pollution" in the registry, so we'll need some cleanup, and scaling might be an issue:
      AWS Container Registry is limited to only 100 tags per image, which would correspond to
      100 branches (Option 1) or 100 commits (Option 2).

Let me know what you think.

@alexcrichton
Member

@steffengy Hm, I may not be fully understanding what this implies, but isn't the registry approach the same as just using a cache location determined by the image/branch name? We're sort of running a "pseudo registry" today with curl and docker load, it seems, and I'm not sure what --cache-from would do that isn't already being done.

I could be missing something though!

@ishitatsuyuki
Contributor

Also take a look at buildkit: it offers more flexibility than docker build, such as mounting volumes and more efficient caching.

@steffengy
Contributor

@alexcrichton Yeah, it wouldn't really provide enough to justify the additional work over tagging by image & branch on S3.

@ishitatsuyuki's suggestion of buildkit might be interesting, but it seems to require quite a bit of work:

  • There are --export-cache and --import-cache flags, which buildkit uses to query a registry or local source for the layers it might need (see the sketch after this list).
  • Unfortunately this doesn't help us, since it also only caches the last build and wouldn't solve our issue with branches trashing each other's caches.
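
For reference, a hedged example of what such a buildkit invocation might look like; the flag syntax has varied across buildkit versions, and the registry reference and paths are placeholders.

```sh
# Illustrative buildctl invocation; flag syntax differs between buildkit
# versions, and the registry ref is a placeholder.
buildctl build \
  --frontend dockerfile.v0 \
  --local context="src/ci/docker/$image" \
  --local dockerfile="src/ci/docker/$image" \
  --import-cache type=registry,ref="registry.example.invalid/rust-cache:$image" \
  --export-cache type=registry,ref="registry.example.invalid/rust-cache:$image"
```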

A few more ideas for inspiration:

  • Modify buildkit to support caching/fetching layers to/from S3 (the code is unfortunately very sparsely documented, and maintaining this might be real work...).
    (Modifying docker itself wouldn't work because its instruction-cache architecture is different.)
  • Propose "multi-revision" caching to buildkit's upstream. I fear that the registry infrastructure wouldn't efficiently handle many revisions, though. Still, it might be worth a shot to get some ideas from them, so I might do this soon.
  • Version all our Dockerfiles so we explicitly depend on versions published to S3/a registry; this would probably be quite a lot of maintenance work as well. Doesn't sound realistic to me.
  • Roll our own caching layer, based on S3 and something that parses Dockerfiles and the files they depend on.

@alexcrichton
Member

@steffengy I think for now we should probably just switch to image:branch caching and see how that works. That's much closer to what Travis was already doing, and it's incrementally better in that it caches based on the branch as well.

Unfortunately, learning the branch isn't the easiest thing in the world right now, but I imagine we could throw something in there to make it not too bad.
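
As a minimal sketch of an image:branch key (TRAVIS_BRANCH is a standard Travis variable, and the bucket layout is made up):

```sh
# Minimal sketch of an image:branch cache key. TRAVIS_BRANCH is a standard
# Travis variable, but on bors merges it is always `auto`, so distinguishing
# master/beta would need extra information from the tree or the environment.
cache_branch="${TRAVIS_BRANCH:-unknown}"
cache_key="docker/$image/$cache_branch"
echo "would cache to s3://rust-ci-caches/$cache_key"
```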

@steffengy
Contributor

@alexcrichton Maybe it makes sense to combine that with trying to load both image:branch and image:master at the start of the build (or whatever branches make sense), to allow for cross-branch cache synergy? Of course that would mean n times the download and docker load time, but depending on how small the relative cost is, it might still be worth it.

@alexcrichton
Member

@steffengy Perhaps, yeah, but I've found that the curl + docker load step can often take quite a while for larger images, and they change rarely enough these days that I don't think the benefit would outweigh the cost.

@steveklabnik
Member

Triage: Over two years later, I am not sure if this ticket is still relevant, given the amount of changes we've undergone. Maybe it is, maybe it isn't.

@kennytm
Member Author

kennytm commented Sep 25, 2020

The dist-x86_64-linux image moved from CentOS 5 to Debian 6 in #74163, but we are still building GCC 5.5 + Python 2.7 + LLVM from scratch, so the issue still exists.

@sanmai-NL

sanmai-NL commented Nov 24, 2022

I saw a lot of custom caching tricks in https://github.com/rust-lang/rust/blob/fd815a5091eb4d49cd317f8ad272f17b7a5f550d/src/ci/docker/run.sh, but little use of the Docker Engine's or other container builders' caching features, such as buildx bake, --cache-from/--cache-to, Dockerfile instruction ordering or COPY --link (locations).

Would a PR in this direction be helpful? I'm not sure who to ping; perhaps @hkratz or @pietroalbini?
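
As a hedged example of the built-in route, a registry-backed BuildKit cache invocation could look roughly like this; the registry reference and image directory are placeholders, not the repository's actual setup.

```sh
# Sketch of BuildKit's registry-backed cache via --cache-from/--cache-to.
# The registry ref and image directory are placeholders.
docker buildx build \
  --cache-from type=registry,ref="registry.example.invalid/rust-ci-cache:$IMAGE" \
  --cache-to type=registry,ref="registry.example.invalid/rust-ci-cache:$IMAGE",mode=max \
  -t "rust-ci:$IMAGE" \
  "path/to/docker/$IMAGE"
```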

@sanmai-NL

Also, why the custom caching, and can we reduce it at some point?
