aws_ecs_service InvalidParameterException: Creation of service was not idempotent #2283
Comments
Getting this on TF v0.9.11 without a placement strategy configured.
I can confirm; I have the same issue. A workaround is to remove the service manually.
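For anyone removing the service by hand, one way to do it is with the AWS CLI; a minimal sketch, with placeholder cluster and service names:

```sh
# Delete the service even if it still has running tasks (--force),
# so Terraform can recreate it on the next apply.
aws ecs delete-service --cluster my-cluster --service my-service --force
```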
Same issue.
Same issue.
Another workaround I found is to rename the service at the same time that the placement strategy is modified.
Same issue here.
Same again.
Same issue here as well; any conclusions?
Note that in the most recent provider versions, this has been changed to ordered_placement_strategy.
Hello, I just changed a service to use ordered_placement_strategy instead of placement_strategy, and the terraform apply fails. Plan output:

-/+ module.xxxx.aws_ecs_service.api-gateway (new resource required)
id: "arn:aws:ecs:us-west-2:XXXX:service/api-gateway" => <computed> (forces new resource)
cluster: "cluster" => "cluster"
deployment_maximum_percent: "200" => "200"
deployment_minimum_healthy_percent: "100" => "100"
desired_count: "2" => "2"
health_check_grace_period_seconds: "180" => "180"
iam_role: "arn:aws:iam::xxxx:role/ecs_service_role" => "arn:aws:iam::xxxx:role/ecs_service_role"
launch_type: "EC2" => "EC2"
load_balancer.#: "1" => "1"
load_balancer.3428707558.container_name: "api-gateway" => "api-gateway"
load_balancer.3428707558.container_port: "8080" => "8080"
load_balancer.3428707558.elb_name: "" => ""
load_balancer.3428707558.target_group_arn: "arn:aws:elasticloadbalancing:us-west-2:xxxx:targetgroup/API/4440036037fbdee4" => "arn:aws:elasticloadbalancing:us-west-2:xxxx:targetgroup/API/4440036037fbdee4"
name: "api-gateway" => "api-gateway"
ordered_placement_strategy.#: "" => "1" (forces new resource)
ordered_placement_strategy.0.field: "" => "instanceId" (forces new resource)
ordered_placement_strategy.0.type: "" => "spread" (forces new resource)
placement_strategy.#: "1" => "0" (forces new resource)
placement_strategy.2750134989.field: "instanceId" => "" (forces new resource)
placement_strategy.2750134989.type: "spread" => "" (forces new resource)
task_definition: "arn:aws:ecs:us-west-2:xxxx:task-definition/api-gateway:58" => "${aws_ecs_task_definition.api-gateway_definition.arn}"

Result for terraform apply:
Provider version:
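For reference, a minimal sketch of the block change being applied in the plan above; the strategy values come from the plan output, while everything else about the service is assumed:

```hcl
resource "aws_ecs_service" "api-gateway" {
  name            = "api-gateway"
  cluster         = "cluster"
  desired_count   = 2
  launch_type     = "EC2"
  task_definition = "${aws_ecs_task_definition.api-gateway_definition.arn}"

  # Previously configured with the deprecated block:
  # placement_strategy {
  #   type  = "spread"
  #   field = "instanceId"
  # }

  # Replacement block; switching from placement_strategy to this forces
  # a new service, which is where the idempotency error appears.
  ordered_placement_strategy {
    type  = "spread"
    field = "instanceId"
  }
}
```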
Same.
Getting this on the docker image hashicorp/terraform:light as well.
From what I can see, Terraform decides that the existing service doesn't exist and tries to create it, which Amazon isn't allowing. It should be modifying the existing service instead.
As it turns out, in our case the problem was caused by the create_before_destroy flag in the resource's lifecycle block. After removing it, the terraform apply succeeded.
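For clarity, this is the kind of lifecycle setting being referred to; a minimal sketch, with the rest of the service configuration assumed:

```hcl
resource "aws_ecs_service" "example" {
  # ... other service arguments ...

  # Removing this block avoided the error in the case above: without it,
  # the old service is destroyed before the new one is created, so the
  # replacement never collides with an existing service of the same name.
  lifecycle {
    create_before_destroy = true
  }
}
```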
I think the same issue applies if you attach the load_balancer block (and likely the target_group_arn) to the ECS service as well, since those settings can only be applied when creating the service. In my use case (just the load_balancer block, no ordered_placement_strategy block), the service gets provisioned properly, but its state never gets recorded, not even partially. So, in subsequent TF runs, it says it wants to add a brand new ECS service, but it errors out with the same "Creation of service was not idempotent" message.
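For context, a sketch of the kind of load_balancer block being described; all names are illustrative. As the comment notes, these settings can only be applied at service creation, so changing them forces a destroy-and-recreate of the service:

```hcl
resource "aws_ecs_service" "example" {
  name            = "api-gateway"
  cluster         = "example-cluster"
  task_definition = "${aws_ecs_task_definition.example.arn}"
  desired_count   = 2

  # Can only be set when the service is created; changes here force
  # Terraform to replace the service.
  load_balancer {
    target_group_arn = "${aws_lb_target_group.example.arn}"
    container_name   = "api-gateway"
    container_port   = 8080
  }
}
```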
Same issue.
Ran into the same issue; removing create_before_destroy from the ECS resource, as @oanasabau noted, worked for us too.
The best way I found is to change the name (+1 @davidminor) while keeping the lifecycle block. This way a new service is created without interrupting the existing one, and the old service is disposed of only once the new one is active.
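A minimal sketch of that approach; the name suffix is illustrative and the rest of the service configuration is assumed:

```hcl
resource "aws_ecs_service" "example" {
  # Bump the name (e.g. a version suffix) whenever a change forces
  # replacement, so the new service never collides with the old one.
  name = "api-gateway-v2"

  # ... other service arguments ...

  # Keep create_before_destroy so the new service is running before the
  # old one is destroyed.
  lifecycle {
    create_before_destroy = true
  }
}
```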
I have the same issue, but it got solved by rerunning terraform apply on the same resource. I think Terraform cannot destroy and create the service at the same time, so it needs a two-step apply; alternatively, just removing the lifecycle block would also solve the issue.
@bkate I had the same issue: Terraform deleted the service but then threw this error. I reran the apply and that worked fine.
I'm running into the same problem, but I'm not getting a description of the error like others are. I've also tried enabling debug logging.
Edit: found the solution to my problem. The name in my task definition did not match the one referenced by the service.
I ran into this today and I'm wondering if it could be side-stepped with support for name_prefix.
Ran into it today. Indeed, if create_before_destroy is used, we need name_prefix instead of name. Has anyone found a workaround for generating a new name? I tried random_id but am not sure what to use for the "keepers" section.
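One possible wiring of the random_id keepers, as a sketch; the assumption is that the keepers mirror whichever arguments force the service to be replaced (the values here are illustrative):

```hcl
# A new id (and therefore a new service name) is generated whenever any
# of the "keepers" values change.
resource "random_id" "service_suffix" {
  byte_length = 4

  keepers = {
    strategy_type  = "spread"
    strategy_field = "instanceId"
  }
}

resource "aws_ecs_service" "example" {
  # Unique name per replacement, so create_before_destroy can bring up
  # the new service while the old one still exists.
  name = "api-gateway-${random_id.service_suffix.hex}"

  # ... other service arguments ...

  ordered_placement_strategy {
    type  = "spread"
    field = "instanceId"
  }

  lifecycle {
    create_before_destroy = true
  }
}
```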
Due to this issue and the missing name_prefix, I tried generating the service name dynamically, together with create_before_destroy on the service. This helps partially, but it does not give a zero-downtime deployment: the service is now being recreated on every terraform run.
Having the same issue here. Any good solution from the crowd so far? I think support for zero-downtime deployment is essentially warranted.
I had this issue when I renamed the directory that a Terraform module (for a Fargate service that was already deployed) was located in and then tried to redeploy with the new directory name. After renaming it back to the previous name, destroying the service, renaming the directory to the new name again, and deploying, I no longer had the issue.
Adding to the chorus suggesting that the apply succeeds when it is simply run a second time.
As such, it's my belief there is some kind of race condition or internal state issue that prevents the subsequent creation of a service after the API returns from the deletion. Is there a means of injecting a manual "sleep" here? We would benefit from resolving this so our CI/CD pipeline could manage this transition rather than requiring "babysitting" through two deploys, with increased downtime.
Update: When I check the plan output, it shows "# forces replacement" for capacity_provider_strategy; maybe this is causing the issue.

# module.ecs_cluster.aws_ecs_service.service must be replaced
+/- resource "aws_ecs_service" "service_name" {
- health_check_grace_period_seconds = 0 -> null
~ iam_role = "aws-service-role" -> (known after apply)
~ id = "****" -> (known after apply)
+ launch_type = (known after apply)
name = "***"
+ platform_version = (known after apply)
- propagate_tags = "NONE" -> null
scheduling_strategy = "REPLICA"
- tags = {} -> null
~ task_definition = "***:6" -> (known after apply)
wait_for_steady_state = false
- capacity_provider_strategy { # forces replacement
- base = 0 -> null
- capacity_provider = "name_of_capacity_provider" -> null
- weight = 1 -> null
}
- deployment_controller {
- type = "ECS" -> null
}
....
    }

Update 2: Workaround:

lifecycle {
  ignore_changes = [
    capacity_provider_strategy
  ]
}

Like mentioned here: #11351. I suspect this happens when the capacity_provider_strategy forces the replacement.

Update 3: Workaround:

resource "aws_ecs_service" "service_name" {
  name = "service_deploy_num_1"
}

Still, this will lead to about 20 seconds of downtime for your service if you are running ECS with a load balancer.
Worked for me.
Deleting the service and running terraform apply again worked.
I just hit this issue. My problem was that the service already existed. I don't think anything could have created the service out-of-band; the service name includes the workspace name. My best guess is that Terraform somehow dropped the service from its state without deleting the underlying resource. I was able to recover by importing the service and doing a fresh apply.
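For anyone recovering the same way, a sketch of the import step; the resource address and the cluster/service names are placeholders:

```sh
# Re-attach the existing service to the Terraform state, then apply.
terraform import aws_ecs_service.main my-cluster/my-service
terraform apply
```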
We ran into this today. We weren't setting any lifecycle rules on the service resource, and we attempted all of the solutions listed above.
I randomly got it working by just trying again. What's interesting is that the same plan/apply ran without issue in our lower environments: identical configuration, identical changeset, and those worked flawlessly. I suspect it was pure luck that it worked fine twice before, and again when it worked after simply retrying. Terraform version: 1.1.4
We ran into this issue today as well.
In our case we have another layer (Pulumi), but we are quite sure this is due to the Terraform provider.
EDIT: the solution here was also to remove the service manually on the AWS side.
This issue was originally opened by @simoneroselli as hashicorp/terraform#16635. It was migrated here as a result of the provider split. The original body of the issue is below.
Terraform Version
v0.10.8
Hi,
Terraform is failing to modify the "placement strategy" of ECS service resources. Since this value can only be set at service creation time, the expected behaviour would be "destroy 1, add 1", as the terraform plan correctly reports. However, terraform apply fails.
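For illustration, a minimal sketch of the kind of resource and change involved; all names and values are assumptions rather than the reporter's actual configuration:

```hcl
resource "aws_ecs_service" "main" {
  name            = "example-service"
  cluster         = "example-cluster"
  task_definition = "example-task:1"
  desired_count   = 2

  # Changing this block can only be honoured by destroying and
  # recreating the service; applying that replacement is what triggers
  # the "Creation of service was not idempotent" error.
  placement_strategy {
    type  = "spread"
    field = "instanceId"
  }
}
```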
Fail Output
Error: Error applying plan:
1 error(s) occurred:
module.<my_module>.aws_ecs_service.<my_resource_name>: 1 error(s) occurred:
aws_ecs_service.main: InvalidParameterException: Creation of service was not idempotent.
status code: 400, request id: xxxxxxxxxxxxxxxx "..."
Expected Behavior
destroy service, add service.
Actual Behavior
Terraform fails without making the modification.
Steps to Reproduce
terraform plan
Plan: 1 to add, 0 to change, 1 to destroy.
terraform apply
InvalidParameterException: Creation of service was not idempotent