Terraform plan fails while AWS Elasticache Redis cluster is scaling out #18116
Implementation note: Based on the error message, this is likely related to how the resource manages tags on the individual cluster nodes. The attempt to read tags on the node has failed because the node has been removed or (possibly) is scaling.
Is it advisable to catch this error and proceed while skipping any changes based on tags during ElastiCache scale-up operations? Otherwise, we are effectively DOS-ed from running Terraform for hours (or however long the scale-up takes) 😢
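Not a fix in the provider, but a stopgap some users apply while the cluster is mid-resize: skip the state refresh entirely, so the provider never issues the failing `ListTagsForResource` call during plan. Note that the plan then runs against stale state.

```shell
# Stopgap sketch: plan without refreshing state while the cluster is resizing.
# The tag-listing API call happens during refresh, so skipping refresh avoids it.
terraform plan -refresh=false
```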
I've hit this a few times recently. I would also be interested in the answer to the catch question above.
This functionality has been released in v3.62.0 of the Terraform AWS Provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading. For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template. Thank you!
Hi @gdavison, this issue is still not fixed. Terraform plans continue to fail during the "list tags" operation when the ElastiCache cluster is not "available" due to some cluster operation. I'm hoping that when ElastiCache is in the middle of an operation, we can perhaps skip refreshing the tag state.
From https://docs.aws.amazon.com/cli/latest/reference/elasticache/list-tags-for-resource.html
The AWS provider ideally should be able to handle this situation gracefully. @gdavison - I would propose re-opening this ticket as I think #21185 does not address this. (cc @ewbankkit who reviewed the PR)
Please reopen this issue, as this is creating a problem for any ElastiCache cluster provisioned with TF that happens to enter a snapshot state.
Hi, I am facing a similar issue. Is there a fix for this? Thanks!
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
Community Note
Terraform CLI and Terraform AWS Provider Version
Terraform v0.14.5
Affected Resource(s)
Terraform Configuration Files
Please include all Terraform configurations required to reproduce the bug. Bug reports without a functional reproduction may be closed without investigation.
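The original report omits a configuration. A minimal sketch that exercises the failing path might look like the following; all names and sizes are illustrative, not taken from the report (the `cluster_mode` block and `replication_group_description` argument match the v3.x provider era of this report):

```hcl
resource "aws_elasticache_replication_group" "example" {
  replication_group_id          = "example-redis"
  replication_group_description = "repro for tag refresh failure"
  engine                        = "redis"
  node_type                     = "cache.t3.micro"

  cluster_mode {
    num_node_groups         = 2 # increasing this triggers online resizing
    replicas_per_node_group = 1
  }

  tags = {
    Environment = "test"
  }
}
```

Running `terraform plan` while the resize from a `num_node_groups` change is still in progress should reproduce the tag-listing failure described above.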
Debug Output
Running on Terraform Cloud, which doesn't allow debug output, but I get:
Expected Behavior
When the cluster status is not 'available' (e.g. while adding shards), terraform plan/apply should work without error.
Actual Behavior
Whenever the cluster is not available due to online resizing, terraform plan/apply fails.
Steps to Reproduce
Important Factoids
References