(custom-resources): Package does not exist #30067
Comments
As of 11:00 EST on 5/3, we have been seeing a similar error with Python 3.10, CDK 2.134.0, using an AwsSdkCall for SSM's getParameter action. In our case the call looks like:

```python
cr.AwsCustomResource(
    self,
    "get_parameter",
    on_update=cr.AwsSdkCall(
        service="SSM",
        action="getParameter",
        parameters={
            "Name": parameter_name,
            "WithDecryption": True,
        },
        physical_resource_id=cr.PhysicalResourceId.of(
            str(datetime.utcnow()),
        ),
        region=region,
    ),
    policy=cr.AwsCustomResourcePolicy.from_sdk_calls(
        resources=[
            Stack.of(self).format_arn(
                service="ssm",
                region=region,
                resource="parameter",
                resource_name=parameter_name.lstrip("/"),
            )
        ]
    ),
)
```

The issue also appears to be intermittent for us.
For now, un-setting `installLatestAwsSdk` works around it.
@glitchassassin it looks like you're not using the `installLatestAwsSdk` property?
Correct, we are not. On Friday, it failed on 2 of 6 deploys. Today we've had four successful releases so far and no failures. I'm configuring `installLatestAwsSdk` explicitly now.
Aha, tracked down some logs from Friday! They showed up by default in a CloudWatch log group named after the custom resource handler Lambda.
In another instance:
It seems like each time this runs, there's an initial attempt to install the SDK, which always times out after 120 seconds (based on ResourceProperties in the logs, InstallLatestAwsSdk is true even though it isn't explicitly set in our code). The Lambda is immediately invoked again, and this time the install either succeeds or fails in under a minute. If it fails, it logs that it is falling back to the pre-installed version. After the install, an Update request is logged, and it returns the parameter it's supposed to be fetching correctly (whether the install failed or succeeded). Then, in some cases, there is a second Update request in the logs a couple of minutes later, and that is where the "Package does not exist" error gets thrown. The request is identical to the first Update request except that the physicalResourceId is different (it's using the current date/time as described here). After reviewing our deployment logs, this seems to have happened only when we had back-to-back deployments within a couple of minutes of each other, so the second deployment's Update request hits the same running Lambda instance that was created by the first deployment. It looks like when the Lambda doesn't get cleaned up after an install failure, the next Update request fails.
Based on this (lines 24 to 57 in 8e98078):
Nope! It's actually failing in this block (lines 59 to 66 in 8e98078):
But there's no try/catch here, so this time when loading the package fails, the error gets thrown instead of falling back to the pre-installed version.
Drafting a PR with a fix.
Thanks @athewsey for reporting this issue. There have been multiple incidents of this issue reported by customers recently. Thanks @glitchassassin for submitting a PR.
Hi, any update on this? This is preventing our pipelines from fully passing right now.
Waiting on some guidance on the failing integration tests on the PR - I'm not sure how to resolve the build issues.
@glitchassassin I see that the PR is still open; this is also affecting our deployments. When is this expected to get merged? And is there any workaround for now?
@gg-safe I am still working on getting this merged! I think the workaround for now is to set `installLatestAwsSdk` to `false`.
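For reference, a minimal sketch of that workaround in Python, mirroring the SSM example earlier in the thread; `self`, `parameter_name`, and `region` are placeholders from that example, and only `install_latest_aws_sdk=False` is the relevant change:

```python
from datetime import datetime

from aws_cdk import custom_resources as cr

# Sketch only: explicitly disable the runtime SDK install so the handler uses
# the SDK version bundled with the Lambda runtime instead of attempting (and
# possibly timing out on) an install during deployment.
cr.AwsCustomResource(
    self,
    "get_parameter",
    install_latest_aws_sdk=False,  # the workaround discussed above
    on_update=cr.AwsSdkCall(
        service="SSM",
        action="getParameter",
        parameters={
            "Name": parameter_name,  # placeholder
            "WithDecryption": True,
        },
        physical_resource_id=cr.PhysicalResourceId.of(str(datetime.utcnow())),
        region=region,  # placeholder
    ),
    policy=cr.AwsCustomResourcePolicy.from_sdk_calls(
        resources=cr.AwsCustomResourcePolicy.ANY_RESOURCE,
    ),
)
```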
Thanks for the workaround @glitchassassin, this seems to be working; will test more.
Hi, is there any movement on this? Our deployments (with custom resources) are failing with the exact same issue. In my case they fail regardless of the value of `installLatestAwsSdk`: Received response status [FAILED] from custom resource. Message returned: Package @aws-sdk/client-r53 does not exist.
I've been working through the PR issues with pahud in the CDK Slack; I've cross-posted the latest question on the PR for visibility.
Thank you @glitchassassin -- if there is anything I can do to help move this along, just link me to the Slack discussion (I'm also in that Slack group).
Hi @glitchassassin, has there been any progress on this? This is currently blocking some of our production workflows. If there's anything I can help with to speed this along, please let me know.
@ethanr-bjss It looks like the PR that I was waiting on has been merged, so we should be good to update the integration test snapshots. I'll get started on those now!
+1 to the issue. I am facing this issue, or probably a similar one. Error message:
Updated the issue to
Comments on closed issues and PRs are hard for our team to see. |
Describe the bug
I'm trying to use `AwsCustomResource` from Python for a couple of actions on `@aws-sdk/client-cognito-identity-provider`, and deployment keeps failing with errors like:
Package @aws-sdk/client-cognito-identity-provider does not exist
Expected Behavior
The affected resource (see repro steps below) should deploy successfully and create a user in the provided Cognito user pool.
Current Behavior
I'm getting the above-mentioned error message and the resource fails to create (or roll back/delete). I also tried providing the service name as `CognitoIdentityServiceProvider`, but this gave the same error message (with the `@aws-sdk/client-cognito-identity-provider` package name).
Possibly this may be intermittent, as I managed to get the stack to deploy (updating an existing stack to add this resource) at least once? But now I'm facing the error consistently.
Reproduction Steps
Given a Python CDK construct with a resource something like:
...
Try to deploy the stack.
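Since the original snippet was elided above, here is an illustrative sketch of the kind of construct described: a minimal guess assuming the `adminCreateUser` action, with hypothetical pool and user values that are not from the original report:

```python
from aws_cdk import Stack, custom_resources as cr
from constructs import Construct


class ReproStack(Stack):
    """Hypothetical repro stack; the identifiers below are placeholders."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        user_pool_id = "us-east-1_EXAMPLE"  # placeholder, not from the report

        cr.AwsCustomResource(
            self,
            "CreateCognitoUser",
            on_create=cr.AwsSdkCall(
                # The report says both this package name and
                # "CognitoIdentityServiceProvider" produced the same error.
                service="@aws-sdk/client-cognito-identity-provider",
                action="adminCreateUser",  # assumed action for "create a user"
                parameters={
                    "UserPoolId": user_pool_id,
                    "Username": "example-user",
                },
                physical_resource_id=cr.PhysicalResourceId.of("example-user"),
            ),
            policy=cr.AwsCustomResourcePolicy.from_sdk_calls(
                resources=cr.AwsCustomResourcePolicy.ANY_RESOURCE,
            ),
        )
```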
Possible Solution
🤷♂️
Additional Information/Context
Originally observed on CDK v1.126.0, so tried upgrading to 2.140.0 but it didn't help.
CDK CLI Version
2.140.0
Framework Version
2.140.0
Node.js Version
20.9.0
OS
macOS 14.4.1
Language
Python
Language Version
Python 3.12.1
Other information
Seems possibly related to #28005, which was closed due to inactivity but raised against an older CDK version.