Destroy/recreate DB instance on minor version update rather than updating #9401
Comments
Hmm, looks like that uses the same API as …
The same happens for me while upgrading the minor version of an Aurora MySQL database.
Try ignoring changes to the engine version on the cluster instance:

```hcl
resource "aws_rds_cluster" "main" {
  apply_immediately  = true
  cluster_identifier = "my-cluster"
  engine             = "aurora-postgresql"
  engine_version     = "10.7"
  # other attributes omitted
}

resource "aws_rds_cluster_instance" "cluster_instance" {
  apply_immediately  = true
  identifier_prefix  = "my-instance"
  cluster_identifier = aws_rds_cluster.main.id
  engine             = aws_rds_cluster.main.engine
  engine_version     = aws_rds_cluster.main.engine_version
  # other attributes omitted

  lifecycle {
    create_before_destroy = true
    ignore_changes        = [engine_version]
  }
}
```

My simple experimentation shows that when you change the `engine_version` on the cluster, the instances follow without being replaced. Note that I've also marked the cluster instances as `create_before_destroy`.
AWS Provider: 2.38. We have the same issue with Aurora, but once the instances are destroyed, they cannot be recreated.
As @jcarlson already wrote, the solution is to manage the `engine_version` on the cluster only and leave `engine_version` off the `aws_rds_cluster_instance`, since it is optional there. When you do so, Terraform does an in-place upgrade of the cluster, and AWS RDS upgrades the cluster instances itself. Terraform then sees no difference on the cluster instances and does nothing.
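For illustration, a minimal sketch of that approach, assuming a hypothetical cluster (names, sizing, and the omitted attributes are placeholders, not from the original report):

```hcl
resource "aws_rds_cluster" "main" {
  cluster_identifier = "my-cluster"   # hypothetical identifier
  engine             = "aurora-postgresql"
  engine_version     = "10.7"         # bump only this to upgrade in place
  apply_immediately  = true
  # credentials and other required attributes omitted
}

resource "aws_rds_cluster_instance" "cluster_instance" {
  identifier_prefix  = "my-instance"  # hypothetical prefix
  cluster_identifier = aws_rds_cluster.main.id
  engine             = aws_rds_cluster.main.engine
  instance_class     = "db.r5.large"  # hypothetical sizing
  apply_immediately  = true
  # no engine_version here: RDS upgrades the instances together with the
  # cluster, so Terraform sees no diff on this resource afterwards
}
```

Compared with the earlier snippet, `engine_version` is simply left off the instance instead of being propagated and then ignored via `lifecycle`.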
@pioneer2k have you tried that with …
@nywilken this issue is related to …
Hello guys, I have created a template. If I change anything in this template, it deletes the RDS and creates it again. Is there a way to only modify the RDS instead of deleting it?
I was not able to reproduce this issue. In all the cases I tried, upgrades worked fine as long as the engine version was managed in the `aws_rds_cluster` resource only.
@bill-rich quick question: does that mean we can omit the `engine_version` on the cluster instance? Also, what about using a global cluster? Every `engine_version` change requires recreation when using a global cluster: https://github.com/hashicorp/terraform-provider-aws/blob/main/aws/resource_aws_rds_global_cluster.go#L61 Can we manage this with a similar approach?
Hi @marinsalinas! That is correct: only include `engine_version` on the `aws_rds_cluster` resource.
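As for the global cluster question, a speculative sketch along the same lines (not confirmed in this thread; the identifier is hypothetical): since `engine_version` forces replacement on `aws_rds_global_cluster`, one workaround would be to ignore changes to it in Terraform and perform the version upgrade out of band:

```hcl
resource "aws_rds_global_cluster" "global" {
  global_cluster_identifier = "my-global-cluster"  # hypothetical identifier
  engine                    = "aurora-postgresql"
  engine_version            = "10.7"

  lifecycle {
    # engine_version is marked ForceNew on this resource (see the linked
    # source), so ignore drift here and upgrade via the console/CLI instead
    ignore_changes = [engine_version]
  }
}
```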
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!
Terraform Version
Terraform v0.12.3
Affected Resource(s)
- `aws_rds_cluster`
- `aws_rds_cluster_instance`
Terraform Configuration Files
Debug Output
https://gist.github.com/gbataille/9c7b6084614b1b6c022342c48dbb80f7
Expected Behavior
The DB cluster and DB instances are upgraded in place, just as they are when you do it through the AWS console.
If you do it from the AWS console, the cluster and the instances are put into "upgrading" status, a dump is taken, pg_upgrade is run live, the instances are rebooted (~10 s), and everything is back up.
Actual Behavior
Instances are destroyed and new ones with the new minor version are re-created:
--> it takes way longer
--> the downtime is way longer
Luckily, since it's Aurora and the data layer is separate from the engine, no data was lost.
Steps to Reproduce
1. `terraform apply` with an RDS Aurora cluster specifying PostgreSQL 10.6
2. `terraform apply` with the same cluster specifying PostgreSQL 10.7
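For reference, a minimal repro sketch along those lines (all identifiers, credentials, and sizing below are hypothetical placeholders, not taken from the gist): apply once as written, then change `engine_version` to "10.7" and apply again.

```hcl
resource "aws_rds_cluster" "repro" {
  cluster_identifier  = "repro-cluster"  # hypothetical
  engine              = "aurora-postgresql"
  engine_version      = "10.6"           # change to "10.7" for the second apply
  master_username     = "postgres"
  master_password     = "change-me"      # placeholder only
  apply_immediately   = true
  skip_final_snapshot = true
}

resource "aws_rds_cluster_instance" "repro" {
  identifier         = "repro-instance-1"  # hypothetical
  cluster_identifier = aws_rds_cluster.repro.id
  instance_class     = "db.r5.large"       # hypothetical sizing
  engine             = aws_rds_cluster.repro.engine
  engine_version     = aws_rds_cluster.repro.engine_version
  apply_immediately  = true
}
```

With `engine_version` propagated to the instance as above, the second apply plans a destroy/recreate of the instance; leaving it off the instance (or ignoring changes to it) avoids that, as discussed in the comments.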