[cdk-pipelines] Every asset update mutates the pipeline #9080
I concur, this happened to me as well.
I think a succeeded 'SelfMutate' action combined with a "failed" 'UpdatePipeline' stage is expected in case 'SelfMutate' did actually update the pipeline.
The wording "failed" and the visualization in the Console are confusing at first, but I think that's how CodePipeline deals with restarts internally, i.e. "if I need to start over, I have to fail the current execution so that the new execution can go through without being blocked by the old one" (pipeline executions cannot overtake each other). Looking at the logs of the CodeBuild project of the 'SelfMutate' action, I see that my pipeline is indeed being updated in each iteration, which explains the infinite looping. Those updates to my pipeline are asset updates (and associated role and policy updates). Complete CodeBuild logs:
```
[Container] 2020/07/15 12:57:16 Running command cdk -a . deploy PipelineStack --require-approval=never --verbose
CDK toolkit version: 1.51.0 (build 8c2d53c)
Command line arguments: { _: [ 'deploy' ], a: '.', app: '.', 'require-approval': 'never', requireApproval: 'never', verbose: 1, v: 1, 'ignore-errors': false, ignoreErrors: false, json: false, j: false, ec2creds: undefined, i: undefined, 'version-reporting': undefined, versionReporting: undefined, 'path-metadata': true, pathMetadata: true, 'asset-metadata': true, assetMetadata: true, 'role-arn': undefined, r: undefined, roleArn: undefined, staging: true, 'no-color': false, noColor: false, fail: false, 'build-exclude': [], E: [], buildExclude: [], ci: false, execute: true, force: false, f: false, parameters: [ {} ], 'previous-parameters': true, previousParameters: true, '$0': '/usr/local/bin/cdk', STACKS: [ 'PipelineStack' ], stacks: [ 'PipelineStack' ] }
merged settings: { versionReporting: true, pathMetadata: true, output: 'cdk.out', app: '.', context: {}, tags: [], assetMetadata: true, requireApproval: 'never', toolkitBucket: {}, staging: true }
Toolkit stack: CDKToolkit
Setting "CDK_DEFAULT_REGION" environment variable to eu-central-1
Resolving default credentials
Looking up default account ID from STS
Default account ID: 1111111111
Setting "CDK_DEFAULT_ACCOUNT" environment variable to 1111111111
context: { 'aws:cdk:enable-path-metadata': true, 'aws:cdk:enable-asset-metadata': true }
--app points to a cloud assembly, so we bypass synth
PipelineStack: deploying...
Assuming role 'arn:aws:iam::1111111111:role/hnb659fds-deploy-role-1111111111-eu-central-1'.
Waiting for stack CDKToolkit to finish creating or updating...
[0%] start: Publishing 65a6bd9bc7f71e9fd2a17c7f41bf3d87f620c2930b4ed93db84329371c27:1111111111-eu-central-1
Retrieved account ID 1111111111 from disk cache
Assuming role 'arn:aws:iam::1111111111:role/hnb659fds-file-publishing-role-1111111111-eu-central-1'.
[0%] check: Check s3://cdk-hnb659fds-assets-1111111111-eu-central-1/65a6bd9bc7f71e9fd2a17c7f41bf3d87f620c2930b4ed93db84329371c27
[0%] upload: Upload s3://cdk-hnb659fds-assets-1111111111-eu-central-1/65a6bd9bc7f71e9fd2a17c7f41bf3d87f620c2930b4ed93db84329371c27
[100%] success: Published 65a6bd9bc7f71e9fd2a17c7f41bf3d87f620c2930b4ed93db84329371c27:1111111111-eu-central-1
PipelineStack: checking if we can skip deploy
PipelineStack: template has changed
PipelineStack: deploying...
Attempting to create ChangeSet CDK-5bd31a53-9245-454a-b8cf-60f8fc3afcb6 to update stack PipelineStack
PipelineStack: creating CloudFormation changeset...
 ✅  PipelineStack

Stack ARN:
[Container] 2020/07/15 12:59:37 Phase complete: BUILD State: SUCCEEDED
```
So the underlying issue (bug?) is that the assets (or asset hashes) are being updated in the 'Build' stage, even when the source code has not changed (same git commit). I should maybe add that I'm using my own CodeBuild project to create the CDK synth (as outlined in the API reference) to build my lambdas and React SPA along with my CDK app.

```ts
const buildProject = new codebuild.PipelineProject(this, 'BuildProject', {
  buildSpec: codebuild.BuildSpec.fromSourceFilename('buildspec.yml'),
  environment: {
    buildImage: codebuild.LinuxBuildImage.STANDARD_4_0,
  },
});

const synthAction = new codepipeline_actions.CodeBuildAction({
  actionName: 'Build',
  project: buildProject,
  input: sourceArtifact,
  outputs: [cloudAssemblyArtifact],
});

const pipeline = new pipelines.CdkPipeline(this, 'Pipeline', {
  pipelineName: 'Pipeline',
  cloudAssemblyArtifact,
  sourceAction,
  synthAction,
});
```

My buildspec.yml:

```yaml
version: 0.2
phases:
  install:
    commands:
      # download binaries
  pre_build:
    commands:
      - cd $CODEBUILD_SRC_DIR/infra/lambda # root of my Go lambdas
      - task deps
      - cd $CODEBUILD_SRC_DIR/web # root of React SPA
      - yarn install
      - cd $CODEBUILD_SRC_DIR/infra/ # root of my CDK app, which references lambda and SPA build outputs produced in the "build" phase using lambda.AssetCode.fromAsset and s3deploy.Source.asset
      - yarn install
  build:
    commands:
      - cd $CODEBUILD_SRC_DIR/infra/lambda
      - task build package
      - cd $CODEBUILD_SRC_DIR/web
      - yarn build
      - yarn package:staging
      - yarn package:prod
      - cd $CODEBUILD_SRC_DIR/infra/
      - yarn build
      - yarn cdk synth
artifacts:
  base-directory: infra/cdk.out
  files:
    - '**/*'
```

I don't understand why my assets are being updated on every build. Clearly, Go and Webpack produce new (different) output files (different file creation dates, etc.) for the same input files on every build, but this shouldn't trigger CFN resource updates. I hope to get help here because I'm currently stuck.
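The root cause several commenters converge on is that the asset hash is computed over nondeterministic build *output* (timestamps, random chunk IDs) rather than over the *sources*. The idea of hashing the sources instead, which is what `cdk.FileSystem.fingerprint` does in spirit, can be sketched in plain Node. This is an illustrative helper, not CDK's actual implementation; `hashDirectory` and its `exclude` parameter are hypothetical:

```typescript
import * as crypto from "crypto";
import * as fs from "fs";
import * as path from "path";

// Hypothetical helper: a deterministic hash over a directory's contents.
// Files are visited in sorted order and only relative paths + file contents
// feed the hash, so timestamps and randomized bundler output (when excluded)
// cannot change the result between builds of the same source.
export function hashDirectory(dir: string, exclude: string[] = []): string {
  const hash = crypto.createHash("sha256");
  const walk = (current: string): void => {
    for (const entry of fs.readdirSync(current).sort()) {
      if (exclude.includes(entry)) continue; // e.g. skip the 'dist' output dir
      const full = path.join(current, entry);
      if (fs.statSync(full).isDirectory()) {
        walk(full);
      } else {
        hash.update(path.relative(dir, full)); // path contributes to the hash
        hash.update(fs.readFileSync(full));    // so does the file content
      }
    }
  };
  walk(dir);
  return hash.digest("hex");
}
```

In a CDK app the equivalent lever is passing an explicit `assetHash` computed from the source directory, which is what the `BucketDeployment` example later in this thread does.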
I see similar behaviour in my pipeline. Expected behaviour: … Actual behaviour: … Setup: … Possible explanation: …
FWIW, I had the same issue when running Parcel to bundle my frontend code to be deployed with aws-s3-deployment. It turns out that Parcel has a bug where it includes the current directory in the hash. That creates new assets on every build, and causes the pipeline to update itself forever. It seems that CDK shouldn't update the pipeline if an asset changes. The Assets stage should simply upload all assets in cdk.out, not create specific actions for each asset hash that require modifying the pipeline.
I totally agree with @nonken and @christophgysin. As it turns out, the hashes of my assets change with every build (Webpack generates JS chunks with short random IDs), so that's why the pipeline loops forever.
@asterikx I think I am facing a similar issue with Webpack. However, my Docker images also cause it to get rebuilt. So I think it is both.
Having the same issue. In my build step I need to build my artifacts using Docker. The resulting asset hash constantly changes, leading to the loop.
I'm having the same issue. I have a similar setup to what they describe in this blog: https://aws.amazon.com/blogs/developer/cdk-pipelines-continuous-delivery-for-aws-cdk-applications/ Any known workaround at this point?
The current implementation creates a publishing action for each asset in order to maximize parallelism. Assets are identified by a hash of their contents in order to enable heavy caching throughout the pipeline and automatically invalidate only the needed components. The unfortunate side effect of this design is that any asset change results in a mutation of the pipeline. We will need to explore approaches to avoid that (e.g. by creating publish actions based on the number of assets and not on the actual asset hash). I am retitling this issue so we can follow up on this specific problem. The other problem is if the asset hash is "unstable", as in, a different hash is produced in every build. This may happen if the build output has e.g. timestamps in it and the hash is calculated on the bundle output. To circumvent that, the framework offers knobs to control how the hash is calculated, namely the `assetHash` and `assetHashType` options.
@eladb thanks! So it seems to me that for any compiled binaries or bundled files with random IDs or timestamps, it will be necessary to calculate the hash over the source files manually, e.g.:

```ts
new s3deploy.BucketDeployment(this, 'DeploySite', {
  sources: [
    s3deploy.Source.asset(path.join(__dirname, '../web/dist'), {
      assetHash: cdk.FileSystem.fingerprint(path.join(__dirname, '../web'), { exclude: ['dist'] }),
    }),
  ],
  destinationBucket: siteBucket,
  distribution,
  distributionPaths: ['/index.html', '/runtime-config.js'],
});
```

Would it be possible to generate a token from the source path and use that as the name for the publishing action, e.g. …
If we only use the path to calculate the hash, the asset won't be invalidated when it changes and will never be updated. As I mentioned, I think we need to figure out a way to avoid pipeline updates when assets change. The way to do that would probably be to define publishing actions that are not coupled directly to the asset hash but rather to the number of assets in the app. If a new asset is added, the pipeline will be updated (another publish action is needed), but if assets are only updated, the pipeline won't need to change because we will already have a publishing action for each asset, and only the ones that changed will actually upload data (the rest will succeed as no-ops).
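The decoupling proposed here can be sketched as a pure naming function: action names derive from a stable index, so the pipeline definition depends only on how many assets exist, not on their contents. Rebuilding an asset then leaves the pipeline untouched; only adding or removing one mutates it. This is an illustrative sketch, not the actual CDK implementation; `publishActionNames` and `hashActionNames` are hypothetical names:

```typescript
interface Asset {
  id: string;   // stable construct path / logical id
  hash: string; // content hash; may change on every rebuild
}

// Counter-based naming (the proposed fix): the action names are stable
// across rebuilds, so updating an asset does not change the pipeline.
export function publishActionNames(assets: Asset[]): string[] {
  return assets.map((_, i) => `Asset${i + 1}`);
}

// Hash-based naming (the behavior being fixed): any rebuild renames the
// action, which changes the pipeline definition and triggers SelfMutate.
export function hashActionNames(assets: Asset[]): string[] {
  return assets.map((a) => `Asset-${a.hash.slice(0, 8)}`);
}
```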
Currently, changes to the asset hashes of content result in changes to the definition of the pipeline, causing the self-mutate stage to trigger. This can cause infinite loops for systems where build artifacts change on each build. The cause of this looping is that the asset hash was embedded in the pipeline action name. By replacing the asset hash with a simple asset counter (already used for the action's CloudFormation id), the pipeline definition should remain stable on asset changes.

_Testing:_ Because the pipeline executes the current version of CDK within the pipeline, it's impossible to actually verify that this corrects the issue. To test and verify, I created a sample app where the asset hash changed on each build, then ran subsequent `cdk synth` commands and diffed the output. Prior to this change, there were numerous changes related to the asset hash in the pipeline stack. After, only the buildspec of the CodeBuild project that referenced the hash changed; this does not change the definition of the CodePipeline itself.

fixes #9080

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
Tagging @rix0rrr to confirm whether there possibly was a regression for this.
Hi @ldecaro, The asset hash for the Dockerfile is based on the contents of the docker directory. Are those changing on every synth, perhaps? Are they generated on the fly? Do they have a timestamp in them?
Hi @rix0rrr. Yes, they change every time, or most of the time. The reason is that the application inside the container is expected to change most of the times the pipeline runs. Inside the directory I only keep the Dockerfile and a jar file. In this case, should the pipeline self-update on every run as well?
The scenario is as follows: … The idea is that the second go-around of the synth uses the same source, and so even though the source change frequency might be high, it shouldn't have changed in the time it takes to do the self-update and restart. Ultimately the hash shouldn't change anymore, and the pipeline continues. That is, of course, unless there is something in the synth step itself that introduces nondeterminism.
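This convergence argument can be modeled as a tiny fixed-point loop: each run synthesizes a pipeline definition from the current source, and the loop terminates exactly when synth is deterministic for a fixed source. A hypothetical simulation, not CDK code:

```typescript
// Model of the self-mutating pipeline: synth maps a source revision to a
// pipeline definition; the pipeline converges iff synth is deterministic.
type Synth = (source: string) => string;

export function runsUntilStable(synth: Synth, source: string, maxRuns = 10): number {
  let deployed = ""; // currently deployed pipeline definition
  for (let run = 1; run <= maxRuns; run++) {
    const desired = synth(source);
    if (desired === deployed) {
      return run; // SelfMutate is a no-op: the execution proceeds past UpdatePipeline
    }
    deployed = desired; // SelfMutate updates the pipeline and restarts the execution
  }
  return Infinity; // never converged: the infinite loop reported in this issue
}
```

With a deterministic synth the second run already matches the deployed definition; with nondeterministic output (timestamps, random chunk IDs) every run differs and the loop never ends.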
Thank you for clarifying. In this case, I confirm the diagram above shows the behavior I've been seeing. It doesn't stay in an infinite loop, but it mutates once every time the asset changes, and it usually changes on every commit. This is an extra 85 seconds, in general, to mutate the pipeline/asset.
Alright. I plan to change that, but it is what it is for now.
I think I ran into the same or a similar problem with a CDK Pipeline where the entire stack and the application code for Lambda functions are written in Java 11. I may have been able to solve this problem by doing two things (see "Achieving stable task inputs" in the Gradle documentation). I hope this workaround will help people running into the same issue with a Java setup.
I have experienced problems with the way asset publishing is implemented in the CodePipeline construct a couple of times now, and I see the speed of publishing the assets in parallel as such a minor benefit over the issues it has caused me that I created a CLI tool that allows you to publish the assets based on the … I'm also working on a custom Pipeline construct which has this as a native feature and solves a couple of other issues I see with the CDK-native CodePipeline L3 construct. I hope this helps some people.
I'm experiencing this endless loop where the asset hashes keep changing for reasons I do not understand. This was working just fine for months, so it's not clear where the issue is coming from. Everything else in the synthed template remains the same. AFAIK, synth is not introducing nondeterminism as it hasn't changed. The dockerfiles are exactly the same between runs, but the hashes keep changing.
What else can I do to debug this?
Update: I reverted to … It's now able to progress to … For context: …
I would like to switch back to … Can @rix0rrr maybe shed some light? Possible regression or user error?
Can this issue be prioritized? It's been a P1 for a while now.
The `UpdatePipeline` stage fails although the `SelfMutate` action has completed successfully, and causes the pipeline to start over again. This leads to the pipeline getting caught in an infinite loop.

Reproduction Steps

I have a CDK app with a `PipelineStack` containing a `CdkPipeline` (with two `cdk.Stage`s). I committed my CDK code to git, and then ran `cdk deploy PipelineStack --profile ...` (all environments were bootstrapped before).

Error Log

After creating the pipeline, the first pipeline execution is triggered by a `CreatePipeline` event. The `Source` and `Build` stages succeed and the `SelfMutate` action succeeds, but the `UpdatePipeline` stage fails. The next execution gets triggered by a `StartPipelineExecution` event (probably triggered by the pipeline itself?). I don't know where to find more detailed error messages on what causes the stage to fail. Please point me to the relevant locations and I will provide more information.

Environment

Other

Other users have reported the same issue on Gitter, but no resolution has been provided so far.

This is a 🐛 Bug Report