[aws-codepipeline] Cyclic reference error when importing an S3 bucket in a code pipeline defined in a different stack #10896
Comments
Hello @harshchhatwani, can you show the exact error that you get? Thanks!

Hi @skinny85: the error message contains confidential details of the project, so I can't paste it. Here's the obfuscated version:

@harshchhatwani can you try the solution mentioned here: #5657 (comment), and see if that works for you?

Thanks for the pointer @skinny85: this did not work for me. I was able to get rid of the cyclic error at build time with it, and even deploy the stack successfully, but when I upload an object to the S3 bucket, the pipeline fails with "Insufficient permissions". Looks like this particular line of code is causing that. Without the above workaround I get a "cyclic reference" error during the build, and with the workaround the role doesn't have the required permissions.

Oh, I think I know why that is. It's because of this line: `trigger: S3Trigger.EVENTS`. Can you comment out that line from your code?

@skinny85: That line is in place to use CloudTrail events to trigger the pipeline instead of having the pipeline poll S3 for updates, which is the recommended way per the AWS documentation. What would be the reason to comment this out? Also, if possible, can we get this reproduced on your end to find the fix, and thereby avoid the feedback loop of me trying potential workarounds? It'll save the churn of going back and forth here. :)

The reason would be that it's the Event that is causing this cycle, I believe.

I did, and it did help locally.
@skinny85: So is the proposal to not use the Event, and instead have CodePipeline poll the S3 bucket for changes?
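For reference, a minimal sketch of that polling fallback, reusing the names from the snippet in the reply below (`S3Trigger.POLL` is the codepipeline-actions trigger value that enables polling):

```ts
const pollingSourceAction = new S3SourceAction({
  actionName: 'S3SourceAction',
  bucket: s3Bucket, // the versioned bucket defined in the other stack
  bucketKey: `${id}/upload.zip`,
  output: pipelineSourceActionOutput,
  // POLL makes CodePipeline check the bucket for changes itself, so no
  // cross-stack Event Rule (and therefore no cycle) is created.
  trigger: S3Trigger.POLL,
});
```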
That's a temporary fix while I work on this more deeply, yes 🙂. You can also try creating the Event explicitly in the Pipeline Stack, something like:

```ts
const pipelineSourceAction = new S3SourceAction({
  actionName: "S3SourceAction",
  bucket: s3Bucket, // versioned Bucket (and CloudTrail) defined in a different stack
  bucketKey: `${id}/upload.zip`,
  output: pipelineSourceActionOutput,
  trigger: S3Trigger.NONE,
  runOrder: 1,
});

const importedBucket = s3.Bucket.fromBucketName(this, 'ImportedBucket', s3Bucket.bucketName);
importedBucket.onCloudTrailWriteObject('EventId', {
  target: new event_targets.CodePipeline(pipeline),
  paths: [`${id}/upload.zip`],
});
```

That should make it work with Events, without the need for polling.
Reference documentation: here.

@skinny85 unfortunately not:

```ts
const s3key = 'src.zip';

const trail = new Trail(this, 'Trail', {
  isMultiRegionTrail: false,
});
trail.addS3EventSelector(
  [
    {
      bucket: props.bucket,
      objectPrefix: s3key,
    },
  ],
  {
    readWriteType: ReadWriteType.WRITE_ONLY,
  },
);

const s3SourceArtifact = new codepipeline.Artifact('source-code-artifact');
pipeline.addStage({
  stageName: 'Source',
  actions: [
    new codepipelineActions.S3SourceAction({
      actionName: 'S3Source',
      role: pipelineRole,
      bucket: props.bucket,
      bucketKey: `/${s3key}`,
      output: s3SourceArtifact,
      trigger: S3Trigger.NONE,
    }),
  ],
});

// when this is commented out, there is no circular dependency
props.bitbucketStore.onCloudTrailWriteObject('EventId', {
  target: new CodePipelineTarget(pipeline),
  paths: [s3key],
});
```

Some additional bugs I see:

```ts
trail.addS3EventSelector([
  {
    bucket: props.bucket,
    objectPrefix: '/prefix',
  }], {
  readWriteType: ReadWriteType.WRITE_ONLY,
});
```

creates a trail with a double slash instead of a single one (which is inconsistent). Also, first deploying

```ts
const trail = new Trail(this, 'Trail', { });
```

and then updating the CDK code to

```ts
const trail = new Trail(this, 'Trail', {
  isMultiRegionTrail: false,
});
```

results in an error. Let me know if you'd like the last two to be reported separately, and if you have any idea for another workaround for setting up Trail events to CodePipeline. Manual workaround:
Yes please - those sound like separate problems with the `Trail` construct.
What's `props.bitbucketStore`?
It's an S3 Bucket created in another stack and passed in as props, like this:

where the StackProps are defined like this:
I'm having a little trouble following your intent here, unfortunately. You have a CodePipeline with an S3 source:

```ts
pipeline.addStage({
  stageName: 'Source',
  actions: [
    new codepipelineActions.S3SourceAction({
      actionName: 'S3Source',
      role: pipelineRole,
      bucket: props.bucket,
      bucketKey: `/${s3key}`,
      output: s3SourceArtifact,
      trigger: S3Trigger.NONE,
    }),
  ],
});
```

But you also want to trigger the pipeline on changes to a different S3 Bucket...?

```ts
props.bitbucketStore.onCloudTrailWriteObject('EventId', {
  target: new CodePipelineTarget(pipeline),
  paths: [s3key],
});
```

Can you explain your reasoning behind this setup?
No, that's the same bucket, and a standard scenario: I have an S3 Bucket, and I want events on that bucket to be emitted and trigger exactly that pipeline. What I'm trying to achieve is full CDK code that:
So, the reason for the cycle here is quite clear:

You can break the cycle by moving the Event that triggers the CodePipeline from Stack1 to Stack2 (you'll need to create it explicitly, like in the example above).
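To make the Stack1/Stack2 shape concrete, here is a condensed sketch of that arrangement. The stack and construct names are hypothetical, and the imports assume CDK v2's `aws-cdk-lib`:

```ts
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as s3 from 'aws-cdk-lib/aws-s3';
import * as cloudtrail from 'aws-cdk-lib/aws-cloudtrail';
import * as codepipeline from 'aws-cdk-lib/aws-codepipeline';
import * as codepipeline_actions from 'aws-cdk-lib/aws-codepipeline-actions';
import * as event_targets from 'aws-cdk-lib/aws-events-targets';

// Stack1: owns the versioned bucket and the Trail; nothing pipeline-related.
class SourceStack extends Stack {
  readonly bucket: s3.Bucket;
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);
    this.bucket = new s3.Bucket(this, 'Bucket', { versioned: true });
    const trail = new cloudtrail.Trail(this, 'Trail', { isMultiRegionTrail: false });
    trail.addS3EventSelector([{ bucket: this.bucket, objectPrefix: 'upload.zip' }], {
      readWriteType: cloudtrail.ReadWriteType.WRITE_ONLY,
    });
  }
}

// Stack2: owns the pipeline AND the Event Rule that triggers it.
class PipelineStack extends Stack {
  constructor(scope: Construct, id: string, props: StackProps & { bucket: s3.IBucket }) {
    super(scope, id, props);
    const sourceOutput = new codepipeline.Artifact();
    const pipeline = new codepipeline.Pipeline(this, 'Pipeline');
    pipeline.addStage({
      stageName: 'Source',
      actions: [new codepipeline_actions.S3SourceAction({
        actionName: 'S3Source',
        bucket: props.bucket,
        bucketKey: 'upload.zip',
        output: sourceOutput,
        trigger: codepipeline_actions.S3Trigger.NONE, // the Rule below triggers instead
      })],
    });
    // (a real pipeline needs at least one more stage)

    // Re-importing the bucket by name anchors the Rule in *this* stack,
    // so Stack1 never needs to reference anything from Stack2.
    const imported = s3.Bucket.fromBucketName(this, 'ImportedBucket', props.bucket.bucketName);
    imported.onCloudTrailWriteObject('TriggerPipeline', {
      target: new event_targets.CodePipeline(pipeline),
      paths: ['upload.zip'],
    });
  }
}
```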
Yes, indeed, the cycle is clear, and I was trying to follow the suggested code, but that also creates the circular dependency. So again, just to be clear:

```ts
const trail = new Trail(this, 'BitBucketUploadsTrail', {
  isMultiRegionTrail: false,
});
trail.addS3EventSelector(
  [
    {
      bucket: props.bitbucketStore,
      objectPrefix: s3key,
    },
  ],
  {
    readWriteType: ReadWriteType.WRITE_ONLY,
  },
);

pipeline.addStage({
  stageName: 'Source',
  actions: [
    new S3SourceAction({
      actionName: 'S3Source',
      runOrder: 1,
      role: devOpsPipelineRole,
      bucket: props.bitbucketStore,
      bucketKey: `${s3key}`,
      output: new Artifact(),
      trigger: S3Trigger.NONE,
    }),
  ],
});

// props.bitbucketStore.onCloudTrailWriteObject('BitBucketZipUploadEvent', {
//   target: new CodePipelineTarget(pipeline),
//   paths: [s3key],
// });
```

Now:

I am looking at aws-events-targets-readme.html#start-a-codepipeline-pipeline, but I don't see an example there of creating an S3-based/CloudTrail-based rule. Do you have any code example for this that could also be added to the docs in the scope of this issue?
I have figured out a way, though I'd say it's not CDK-native, as it's a mix of CDK code and JSON from the tutorial page, but it works:

```ts
new Rule(this, 'rule', {
  eventPattern: {
    source: ['aws.s3'],
    detailType: ['AWS API Call via CloudTrail'],
    detail: {
      eventSource: ['s3.amazonaws.com'],
      eventName: ['CopyObject', 'PutObject', 'CompleteMultipartUpload'],
      requestParameters: {
        bucketName: [props.bitbucketStore.bucketName],
        key: [s3key],
      },
    },
  },
  targets: [new CodePipelineTarget(pipeline)],
});
```

(I guess it could be added to the samples I mentioned above; happy to provide a PR if you tell me where.)
Yes, that's exactly what I meant in #10896 (comment) 🙂.
Had this same issue, but inside a single stack.

```ts
export class BuildStack extends DeploymentStack {
  constructor(scope: App, id: string, props: DeploymentStackProps) {
    super(scope, id, props);

    const repository = new Repository(this, 'Repository', {
      repositoryName: 'Application',
      description: 'Read-only CodeCommit repo.',
    });

    const accessLogsKey = new SecureKey(this, 'AccessLogsKey', {
      enableKeyRotation: true,
      trustAccountIdentities: true,
      alias: 'app-encryption-key',
    });
    accessLogsKey.addToResourcePolicy(new SecurePolicyStatement({
      effect: Effect.ALLOW,
      principals: [new SecureAccountRootPrincipal()],
      actions: [
        'kms:Decrypt',
        'kms:DecryptKey',
      ],
      resources: ['*'],
    }));

    const accessLogsBucket = new SecureBucket(this, 'AccessLogsBucket', {
      bucketName: 'app-access-logs',
      blockPublicAccess: SecureBlockPublicAccess.BLOCK_ALL,
      encryption: BucketEncryption.KMS,
      encryptionKey: accessLogsKey,
      enforceSSL: true,
      // The access logs bucket can be exempt from requiring its own access logs bucket
      serverAccessLogsBucket: Exempt(undefined),
    });
    accessLogsBucket.addToResourcePolicy(new SecurePolicyStatement({
      effect: Effect.ALLOW,
      principals: [new SecureAccountRootPrincipal()],
      actions: [
        's3:GetObject*',
        's3:GetBucket*',
        's3:List*',
      ],
      resources: [accessLogsBucket.bucketArn, `${accessLogsBucket.bucketArn}/*`],
    }));

    const artifactBucket = new SecureBucket(this, 'ArtifactBucket', {
      bucketName: 'app-artifacts',
      encryption: Exempt(BucketEncryption.S3_MANAGED),
      enforceSSL: true,
      serverAccessLogsBucket: accessLogsBucket,
    });
    artifactBucket.addToResourcePolicy(new SecurePolicyStatement({
      effect: Effect.ALLOW,
      principals: [new SecureAccountRootPrincipal()],
      actions: [
        's3:GetObject*',
        's3:GetBucket*',
        's3:List*',
      ],
      resources: [artifactBucket.bucketArn, `${artifactBucket.bucketArn}/*`],
    }));

    const project = new Project(this, 'Project', {
      projectName: 'Application-AutomatedBuild',
      source: Source.codeCommit({
        repository: repository,
      }),
      environment: {
        buildImage: WindowsBuildImage.WIN_SERVER_CORE_2019_BASE,
        computeType: WindowsBuildImage.WIN_SERVER_CORE_2019_BASE.defaultComputeType,
      },
      role: new SecureRole(this, 'ProjectRole', {
        roleName: 'CodeBuildProjectRole-DO-NOT-DELETE',
        assumedBy: new SecureServicePrincipal('codebuild.amazonaws.com'),
      }),
      artifacts: Artifacts.s3({
        bucket: artifactBucket,
        includeBuildId: false,
        packageZip: false,
      }),
    });

    // Required policies to access goshawk (in order to pull CodeArtifact NPM packages)
    project.addToRolePolicy(new SecurePolicyStatement({
      effect: Effect.ALLOW,
      actions: ['s3:GetObject'],
      resources: ['arn:aws:s3:::goshawk/*'],
    }));
    project.addToRolePolicy(new SecurePolicyStatement({
      effect: Effect.ALLOW,
      actions: [
        'goshawk:GetAuthorizationToken',
        'goshawk:ReadFromRepository',
      ],
      resources: ['*'],
    }));

    const pipeline = new Pipeline(this, 'Pipeline', {
      pipelineName: 'Application',
    });
    pipeline.node.addDependency(repository);
    pipeline.node.addDependency(project);

    const sourceOutput = new Artifact('sourceOutput');
    pipeline.addStage({
      stageName: 'Source',
      actions: [
        new CodeCommitSourceAction({
          actionName: 'source',
          repository: repository,
          branch: 'mainline',
          output: sourceOutput,
        }),
      ],
    });

    const buildOutput = new Artifact('BuildOutput');
    pipeline.addStage({
      stageName: 'Build',
      actions: [
        new CodeBuildAction({
          actionName: 'build',
          project: project,
          input: sourceOutput,
          outputs: [buildOutput],
        }),
      ],
    });
  }
}
```

Error:
@jzybert in the pipeline, you shouldn't provide ... Use the ...
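The reply above is truncated. For what it's worth, a common pattern for CodeBuild projects that run inside CodePipeline (a sketch of a general technique, not necessarily the advice that was elided here) is the `PipelineProject` class, which deliberately omits the source and artifacts configuration:

```ts
import * as codebuild from 'aws-cdk-lib/aws-codebuild';

// PipelineProject is a CodeBuild Project preconfigured for CodePipeline:
// it accepts no 'source' or 'artifacts' props, because the pipeline feeds
// the action's input artifact in and captures its output artifact.
const project = new codebuild.PipelineProject(this, 'Project', {
  environment: {
    buildImage: codebuild.WindowsBuildImage.WIN_SERVER_CORE_2019_BASE,
  },
});
```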
…n sources that use CloudWatch Events (#20149)

When using a newly-created, CDK-managed resource, such as an S3 Bucket or a CodeCommit Repository, as the source of a CodePipeline, when it's in a different Stack than the pipeline (but in the same environment as it), and we use CloudWatch Events to trigger the pipeline from the source, that would result in a cycle:

1. Because the Event Rule is created in the source Stack, that Stack needs the name of the pipeline to trigger from the Rule, and also the Role to use for the trigger, which is created in the target (in this case, the pipeline) Stack. So, the source Stack depends on the pipeline Stack.
2. The pipeline Stack needs the name of the source resource. So, the pipeline Stack depends on the source Stack.

The only way to break this cycle is to move the Event Rule to the target Stack. This PR adds an API to the Events module to make it possible for event-creating constructs to make that choice, and uses that capability in the CodePipeline `CodeCommitSourceAction` and `S3SourceAction`.

Fixes #3087
Fixes #8042
Fixes #10896

----

### All Submissions:

* [x] Have you followed the guidelines in our [Contributing guide?](https://github.com/aws/aws-cdk/blob/master/CONTRIBUTING.md)

### Adding new Unconventional Dependencies:

* [ ] This PR adds new unconventional dependencies following the process described [here](https://github.com/aws/aws-cdk/blob/master/CONTRIBUTING.md/#adding-new-unconventional-dependencies)

### New Features

* [ ] Have you added the new feature to an [integration test](https://github.com/aws/aws-cdk/blob/master/INTEGRATION_TESTS.md)?
* [ ] Did you use `yarn integ` to deploy the infrastructure and generate the snapshot (i.e. `yarn integ` without `--dry-run`)?

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
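A sketch of how the capability might surface in user code. Hedged: `crossStackScope` is my reading of the option this change added to the Events module's `OnEventOptions`; verify the exact property name against the released API reference, and `pipelineStack` is a hypothetical variable here:

```ts
// The bucket lives in the source stack; the pipeline lives in pipelineStack.
// Passing the target's Stack as the scope for the generated Rule places the
// Rule (and its trigger Role) next to the pipeline, so the source stack no
// longer needs anything from the pipeline stack and no cycle forms.
sourceBucket.onCloudTrailWriteObject('TriggerPipeline', {
  target: new event_targets.CodePipeline(pipeline),
  paths: ['upload.zip'],
  crossStackScope: pipelineStack, // hypothetical: the Stack that owns the pipeline
});
```

Since the PR wires this into `S3SourceAction` and `CodeCommitSourceAction`, the original cross-stack `S3Trigger.EVENTS` setup from this issue should deploy without any manual Rule.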
❓ General Issue
Hi, I am using CDK to create a CodePipeline with an S3 bucket as its source. The S3 bucket and the pipeline are defined in different stacks; per the AWS documentation, we can access resources in a different stack as long as they are in the same account and AWS Region: https://docs.aws.amazon.com/cdk/latest/guide/resources.html#resource_stack
The Question
This doesn't seem to be working, and I am getting a "Cyclic reference" error. Does this mean the AWS documentation is wrong? Or is there something in particular that's missing to make this work? Sample code below.

I think this line of code is causing the dependency of my stack containing the S3 bucket on the stack that defines the pipeline infra: https://github.com/aws/aws-cdk/blob/master/packages/%40aws-cdk/aws-codepipeline-actions/lib/s3/source-action.ts#L120-L124
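The sample code did not survive the formatting here; purely as an illustration of the setup described (all names hypothetical), the relevant piece would look something like:

```ts
// In the pipeline stack, consuming a bucket owned by another stack:
new codepipeline_actions.S3SourceAction({
  actionName: 'S3Source',
  bucket: props.bucket, // versioned bucket defined in the other stack
  bucketKey: 'upload.zip',
  output: sourceOutput,
  // EVENTS makes the action create a CloudTrail-based Event Rule on the
  // bucket, i.e. in the *bucket's* stack, targeting the pipeline in *this*
  // stack; each stack then needs the other's outputs, hence the cycle.
  trigger: codepipeline_actions.S3Trigger.EVENTS,
});
```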
Environment
Other information