
DockerImageCode: Support automatically using --cache-to and --cache-from #23445

Open · 2 tasks
rpbarnes opened this issue Dec 22, 2022 · 18 comments
Labels: @aws-cdk/aws-ecr · feature-request · p2

Comments

@rpbarnes

Describe the feature

Deploying Docker images via the CDK on CI/CD systems rebuilds the entire Docker image from scratch on every deploy. It takes a major workaround to tell the CDK's DockerImageCode how to use previously stored images in the CDK's ECR repository as caches for the next build.

The CDK should, by default, use Docker's --cache-to and --cache-from arguments when building ECR assets so that each image the CDK builds and uploads to ECR is built incrementally on top of what already exists in ECR.

Use Case

This is useful when the CDK is used to deploy environments from build machines that don't have access to a local Docker cache.

Proposed Solution

When the CDK pushes images to ECR, tag each image with a permanent reference to the asset, potentially the resourceId, so that the image can be referenced on subsequent builds.

When the CDK builds an image, look for an existing image asset in ECR before building; if the asset exists, set the --cache-from flag to point to that image.

When the CDK builds an image, set the --cache-to flag to point to the image's tag in ECR.

The approach described above will add bloat to the images. Another solution could be to push a separate caching image alongside each 'production' image. That way the --cache-to and --cache-from flags would point to the caching image, and the production image would be built without any of the caching layers.
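For illustration, a minimal sketch of what that separate-cache-image build could look like. This is not the CDK's actual implementation; the repository URI, tags, and the use of Node's child_process to shell out are placeholders and assumptions.

  import { spawnSync } from 'child_process';

  // Hypothetical asset repository and tags -- placeholders, not real CDK naming.
  const repo = '123456789012.dkr.ecr.us-east-1.amazonaws.com/cdk-assets';
  const imageTag = `${repo}:asset-hash`;
  const cacheTag = `${repo}:asset-hash-cache`;

  spawnSync('docker', [
    'buildx', 'build',
    // Reuse layers from the cache image pushed by a previous build, if one exists.
    '--cache-from', `type=registry,ref=${cacheTag}`,
    // Export the build cache to a separate image so the production image stays lean.
    '--cache-to', `type=registry,ref=${cacheTag},mode=max`,
    '-t', imageTag,
    '--push',
    '.',
  ], { stdio: 'inherit' });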

Other Information

No response

Acknowledgements

  • I may be able to implement this feature request
  • This feature might incur a breaking change

CDK version used

2.15.0

Environment details (OS name and version, etc.)

mac osx

rpbarnes added the feature-request and needs-triage labels Dec 22, 2022
github-actions bot added the @aws-cdk/aws-ecr label Dec 22, 2022
@chris-adam-b12

Great idea. I'm facing the same limitation.

Adding the cacheFrom and cacheTo Construct Props to the DockerImageAsset Construct would be the best way to handle it.
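For example, a usage sketch of what such props might look like on DockerImageAsset (the prop names and option shape here are assumptions based on this suggestion, not a confirmed API):

  import { Stack } from 'aws-cdk-lib';
  import { DockerImageAsset } from 'aws-cdk-lib/aws-ecr-assets';

  const stack = new Stack();
  new DockerImageAsset(stack, 'MyImage', {
    directory: './app',
    // Hypothetical props, forwarded straight to docker buildx:
    cacheFrom: [{ type: 'registry', params: { ref: 'my-repo:cache' } }],
    cacheTo: { type: 'registry', params: { ref: 'my-repo:cache', mode: 'max' } },
  });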

@RichiCoder1
Contributor

I wonder if this hasn't been done already for the ironic reason that ECR itself doesn't support Cache Manifests. I'd personally love to see this though; the current behavior makes builds unnecessarily long and/or forces you to do builds outside CDK, which somewhat defeats the purpose.

@chris-adam-b12

I wonder if this hasn't been done already for the ironic reason that ECR itself doesn't support Cache Manifests. I'd personally love to see this though; the current behavior makes builds unnecessarily long and/or forces you to do builds outside CDK, which somewhat defeats the purpose.

I tried docker-compose with an ECR registry as the cache and it works. It also works with AWS Copilot. But I can't find a way to do it with the AWS CDK.

@rpbarnes
Author

rpbarnes commented Feb 1, 2023

I ended up using depot.dev (a paid Docker build service; it's really fast) and docker compose to completely circumvent the CDK's Docker build process.

If you're interested I can put together a small write up of what I did with some code samples.

@marcesengel

In the meantime, it's possible to use buildx with type=local caching and either S3 or EFS backing the target folder(s).
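A sketch of that workaround, assuming the cache folder is an EFS mount (or a directory synced to/from S3 around the build); the path and tag are placeholders:

  import { spawnSync } from 'child_process';

  // /mnt/build-cache is assumed to be backed by EFS, or synced to S3 before and after the build.
  spawnSync('docker', [
    'buildx', 'build',
    '--cache-from', 'type=local,src=/mnt/build-cache',
    '--cache-to', 'type=local,dest=/mnt/build-cache,mode=max',
    '-t', 'my-image:latest',
    '.',
  ], { stdio: 'inherit' });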

@RichiCoder1
Contributor

On the note of caching, I added #24024 to hopefully at least expose the flags to do so.

pahud added the p2 label and removed the needs-triage label Feb 16, 2023
@jamesmcglinn

@RichiCoder1 did you find a way to have DockerImageAsset use a container driver rather than the default docker driver?

@RichiCoder1
Contributor

RichiCoder1 commented Apr 2, 2023

@RichiCoder1 did you find a way to have DockerImageAsset use a container driver rather than the default docker driver?

I believe there's the (undocumented?) flag CDK_DOCKER which changes the binary it'll use for the build command by default: https://github.com/aws/aws-cdk/blob/main/packages/cdk-assets/lib/private/docker.ts#L261

It must use a docker-compliant CLI API though.

@jamesmcglinn

I believe there's the (undocumented?) flag CDK_DOCKER which changes the binary it'll use for the build command by default: https://github.com/aws/aws-cdk/blob/main/packages/cdk-assets/lib/private/docker.ts#L261

It must use a docker-compliant CLI API though.

I'm using a custom build image for selfMutation, creating & bootstrapping a container driver in the prebuild phase to enable --cache-to and --cache-from.

DockerImageAsset appears to be calling docker build from the custom build image but doesn't have the container driver loaded – as though it's using the image from before the buildspec commands were run.

I could script CDK_DOCKER to check for the driver and load it if needed, but I'm wondering if I've overlooked a simpler approach.
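For reference, a rough sketch of that CDK_DOCKER wrapper idea: a Node script that ensures a docker-container builder exists, then passes everything through to the real docker CLI. The builder name is a placeholder, and this assumes docker build is routed through buildx so the current builder is actually used.

  #!/usr/bin/env node
  import { spawnSync } from 'child_process';

  const builder = 'cdk-builder'; // hypothetical builder name

  // Create the docker-container builder if it doesn't exist yet, and make it the current one.
  if (spawnSync('docker', ['buildx', 'inspect', builder]).status !== 0) {
    spawnSync('docker', ['buildx', 'create', '--name', builder, '--driver', 'docker-container', '--use'], { stdio: 'inherit' });
  }

  // Delegate whatever the CDK invoked (build, push, login, ...) to the real docker CLI.
  const result = spawnSync('docker', process.argv.slice(2), { stdio: 'inherit' });
  process.exit(result.status ?? 1);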

@tomwwright
Contributor

tomwwright commented May 15, 2023

Based on the comment here, it looks like support for cache manifests in AWS ECR for --cache-to is almost here -- will that unblock this issue once it's available?

Interested in this one as well

madeline-k removed their assignment Oct 30, 2023
@modosc

modosc commented Nov 8, 2023

It looks like this functionality will be available in ECR when Docker 25 is released (or you can manually update BuildKit to 0.12):

https://aws.amazon.com/blogs/containers/announcing-remote-cache-support-in-amazon-ecr-for-buildkit-clients/
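The key detail from that post is that the cache manifest has to be exported in an ECR-compatible form. A sketch of the flags it describes, assuming BuildKit 0.12+ / Docker 25 (the repository URI is a placeholder):

  import { spawnSync } from 'child_process';

  const cacheRef = '123456789012.dkr.ecr.us-east-1.amazonaws.com/my-repo:cache'; // placeholder

  spawnSync('docker', [
    'buildx', 'build',
    '--cache-from', `type=registry,ref=${cacheRef}`,
    // image-manifest=true and oci-mediatypes=true make the exported cache manifest ECR-compatible.
    '--cache-to', `type=registry,ref=${cacheRef},mode=max,image-manifest=true,oci-mediatypes=true`,
    '-t', 'my-image:latest',
    '.',
  ], { stdio: 'inherit' });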


This issue has received a significant amount of attention so we are automatically upgrading its priority. A member of the community will see the re-prioritization and provide an update on the issue.

github-actions bot added the p1 label and removed the p2 label Nov 12, 2023
@BwL1289

BwL1289 commented Nov 22, 2023

Also interested. They're waiting for Docker 25, but see Rafa's comment and link.

@BwL1289

BwL1289 commented Jan 5, 2024

FYI Docker 25 release candidate 1 was released yesterday

@BwL1289

BwL1289 commented Jan 22, 2024

Docker 25 is out. This can now be supported out of the box. reference

@BwL1289

BwL1289 commented Feb 28, 2024

Is there an update on this? We should be unblocked by the Docker 25 release.

@Psynbiotik

I'm also curious if there are any updates on this?

@blimmer
Contributor

blimmer commented Apr 9, 2024

I spent a decent amount of time getting this working for GitHub Actions. Check out https://benlimmer.com/2024/04/08/caching-cdk-dockerimageasset-github-actions/ for details.

I also filed #29768, which might be of interest, too.

pahud added the p2 label and removed the p1 label Jun 11, 2024