"Cannot delete" error occurs when aws_batch_compute_environment used in aws_batch_job_queue is recreated #2044
Comments
I have this same issue; the problem appears to be the disable-then-delete step for both the job queue and the compute environment. It currently takes about 60 seconds to disable and then delete a job queue. As things stand, the job_queue is still in the DELETING state when the batch_compute_environment is sent its kill signal, which leaves the compute environment successfully disabled but never given the delete command.

Workaround: clicking 'delete' on the batch_compute_environment in the console and then manually removing it from the Terraform state is the only workaround I have at the moment.
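A minimal sketch of that manual recovery, assuming the resource address `aws_batch_compute_environment.test` from this issue's example configuration:

```sh
# After disabling and deleting the compute environment in the AWS console,
# drop the orphaned resource from Terraform state so the next plan is clean.
terraform state rm aws_batch_compute_environment.test
terraform apply
```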
@shibataka000 are you still actively working on this module? I'd like to help work out how to get this issue resolved if you are. Andy
@andylockran It's the bug reported at #1710 (comment). I created #2322 to fix it. @mia-0032 That one is caused by another bug; I will create a PR after #2322 is merged.
Sorry, my description and branch name were not good. #2044 has not been fixed yet.
Is this fix available in the latest AWS provider release, 1.5.0? I'm still facing this issue. I'm using Terraform v0.10.5 and AWS provider 1.5.0.
@maulik887 You can update a compute environment that is attached to a job queue by following #2347 (comment) :-)
@shibataka000 wrote:

IIUC this workaround only works once, as the random resource is created only once. The name then becomes fixed, and further changes to the compute_resources that would require resource recreation (i.e. all the parameters marked "Replacement" in https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html) would return the error "Object already exists" as per #3207. You would then need to force an update of the random name (by marking it as tainted).

If my understanding is correct then this issue should still be open. However, at this stage it feels to me that the prefix solution from #3207 (which was also described by @radeksimko in #2347 (comment)) would fix this and other related issues in a clean way (especially given that behind the scenes it all ends up with LC/ASGs anyway), so maybe it should be marked as a duplicate?
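For reference, a sketch of the random-name workaround under discussion. The `random_id` resource, its `keepers`, and all names and IDs here are illustrative assumptions, not the exact snippet from #2347:

```hcl
resource "random_id" "suffix" {
  byte_length = 4
  # Regenerate the suffix whenever a recreation-forcing attribute changes,
  # so the replacement compute environment always gets a fresh name.
  keepers = {
    instance_type = "m4.large"
  }
}

resource "aws_batch_compute_environment" "test" {
  compute_environment_name = "test-${random_id.suffix.hex}"
  type                     = "MANAGED"
  service_role             = "arn:aws:iam::123456789012:role/AWSBatchServiceRole" # placeholder

  compute_resources {
    type               = "EC2"
    instance_role      = "arn:aws:iam::123456789012:instance-profile/ecsInstanceRole" # placeholder
    instance_type      = ["m4.large"] # keep in sync with the keeper above
    min_vcpus          = 0
    max_vcpus          = 2
    security_group_ids = ["sg-00000000"]     # placeholder
    subnets            = ["subnet-00000000"] # placeholder
  }

  lifecycle {
    # Create the replacement (with its new name) before destroying the old
    # environment, avoiding the "Object already exists" conflict.
    create_before_destroy = true
  }
}
```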
I can confirm the delete is still broken in AWS provider 1.5.0.
I get this issue when I try to update the compute environment.
This issue should be reopened; it is still broken in AWS provider 1.22.0.
Same here:
While running:
Also failing while running:
Solved by tainting the job queue; it took around 1.5 minutes for it to be destroyed, though, and almost 20 minutes for the rest:
Although I guess it's an AWS Batch backend/API issue, not really about Terraform?
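A sketch of that taint-based workaround; the resource address `aws_batch_job_queue.test` is assumed from this issue's example configuration:

```sh
# Mark the job queue for recreation so Terraform destroys it together with
# the compute environment on the next apply, instead of leaving the
# dangling JobQueue relationship behind.
terraform taint aws_batch_job_queue.test
terraform apply
```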
I would argue that it is a Terraform provider issue. Making entirely reasonable changes to Terraform config can leave your Terraform resources in an inconsistent state which requires manual intervention to fix. That seems like something that the provider should be dealing with.
Yep, something seems to be off; I just got this other error message trying to destroy the CE today:
@radeksimko Do you mind re-opening this issue?
A fix for me in the meantime:
Just ran into this myself. For me, I initially added a lifecycle rule. If someone could confirm, I'd be happy to take a stab at a pull request for that.
Yes please, it would indeed solve heaps of issues! However, could you create the MR against #3207 instead, as this issue is officially closed?
I have this problem too, and I'm not quite sure why this issue is closed, since it clearly is a problem and it's still there. It should definitely delete the Job Queue before deleting the Compute Environment, and then recreate them, or just modify the Compute Environment directly when possible. Having to intervene manually is a real problem, and at the same time it proves that it can work like that. Is there a PR for that issue right now?
Like @Ludonope, I just ran into this exact same issue.
Had the same issue and followed this comment to recover:
Hi @radeksimko, would it be possible to re-open this bug? I have faced this problem using the latest versions of Terraform and the AWS provider:
#2044 is still a valid fix, but it is not ideal to perform this task every time the compute environment is changed.
The workaround I use is to just add:
So if I change some attribute of the compute environment and run:
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!
Hi there,
I found that a `Cannot delete, found existing JobQueue relationship` error occurs when an `aws_batch_compute_environment` used in an `aws_batch_job_queue` is recreated. Do you have any solutions in regards to this?
Terraform Version
Affected Resource(s)
- aws_batch_compute_environment
- aws_batch_job_queue
Terraform Configuration Files
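A minimal configuration along these lines reproduces the issue; the resource names, ARNs, and network IDs below are illustrative assumptions rather than the original values:

```hcl
resource "aws_batch_compute_environment" "test" {
  compute_environment_name = "test"
  type                     = "MANAGED"
  service_role             = "arn:aws:iam::123456789012:role/AWSBatchServiceRole" # placeholder

  compute_resources {
    type               = "EC2"
    instance_role      = "arn:aws:iam::123456789012:instance-profile/ecsInstanceRole" # placeholder
    instance_type      = ["m4.large"] # changing this forces recreation
    min_vcpus          = 0
    max_vcpus          = 2
    security_group_ids = ["sg-00000000"]     # placeholder
    subnets            = ["subnet-00000000"] # placeholder
  }
}

resource "aws_batch_job_queue" "test" {
  name                 = "test"
  state                = "ENABLED"
  priority             = 1
  compute_environments = [aws_batch_compute_environment.test.arn]
}
```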
Output
plan:
apply:
Panic Output
None
Expected Behavior
1. aws_batch_job_queue is deleted.
2. aws_batch_compute_environment is recreated.
3. aws_batch_job_queue is created again.
Actual Behavior
aws_batch_compute_environment cannot be deleted; apply fails with `Cannot delete, found existing JobQueue relationship`.
Steps to Reproduce
1. terraform apply
2. Change an attribute of aws_batch_compute_environment.test that forces recreation.
3. terraform apply (see the shell sketch below)
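In shell form, assuming the illustrative configuration above:

```sh
terraform apply   # initial create succeeds
# Edit the config, e.g. change instance_type, so the compute environment
# must be recreated.
terraform apply   # fails with:
                  # Cannot delete, found existing JobQueue relationship
```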
Important Factoids
None.
References
I could not find any issues related to this.