Timed Out or Cancelled TaskRun Pods are Deleted #3051
Comments
/assign Will wait for confirmation on whether this is expected for TaskRuns, but can pick this up if it is unintended.
#2365 didn't change that behavior (see https://github.com/tektoncd/pipeline/pull/2365/files#diff-0239ea30db655388fa943dffd3b7a2e6L49); the cancellation feature has always worked by deleting the pod. That was, at the time, the only way to do it. As @imjasonh said somewhere, we may be able to do that now using the entrypoint and some signal (what …
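For context, a rough sketch of what that signal-based approach could look like (an assumption about the design, not Tekton's actual entrypoint code): a wrapper that forwards a termination signal to the step process, so a cancellation can stop the step without deleting the pod.

```go
package main

import (
	"os"
	"os/exec"
	"os/signal"
	"syscall"
)

// Assumed design, not Tekton's actual entrypoint: run the step command and
// forward SIGTERM/SIGINT to it, so cancellation stops the step while the pod
// (and its logs) stays around.
func main() {
	if len(os.Args) < 2 {
		os.Exit(2)
	}
	cmd := exec.Command(os.Args[1], os.Args[2:]...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Start(); err != nil {
		os.Exit(1)
	}

	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGTERM, syscall.SIGINT)
	go func() {
		<-sigs
		// Signal the child instead of relying on pod deletion to stop it.
		_ = cmd.Process.Signal(syscall.SIGTERM)
	}()

	if err := cmd.Wait(); err != nil {
		os.Exit(1)
	}
}
```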
Yes, my misunderstanding, but now it makes sense. I now see this is what is required to actually stop the TaskRun, and I guess I never noticed the behavior. So what could be nice here are two things:
Copying over from Slack: #381 is when we first added this, and it looks like we've been deleting the pods all along - though that surprised me too, and I agree it's inconsistent with how we treat other pods (and makes it harder for dashboard folks :S)
Issues go stale after 90d of inactivity. /lifecycle stale Send feedback to tektoncd/plumbing.
/remove-lifecycle stale
Issues go stale after 90d of inactivity. /lifecycle stale Send feedback to tektoncd/plumbing.
Stale issues rot after 30d of inactivity. /lifecycle rotten Send feedback to tektoncd/plumbing.
Rotten issues close after 30d of inactivity. /close Send feedback to tektoncd/plumbing.
@tekton-robot: Closing this issue.
/reopen I don't think we have reached a resolution for this yet ...
Rotten issues close after 30d of inactivity. /close Send feedback to tektoncd/plumbing.
@tekton-robot: Closing this issue.
Expected Behavior
After a TaskRun is timed out or cancelled, I should still be able to view the logs of the failed or cancelled TaskRun.
Actual Behavior
It appears that the pods for TaskRuns are being deleted due to a change implemented in #2365. In the `failTaskRun` func in `taskrun.go`, the pod associated with the TaskRun is deleted, as noted here. Assuming this is not an expected feature, I would suggest checking for the failure reason before deleting the TaskRun pod:
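For illustration, a minimal sketch of that check - the helper name and reason string below are hypothetical, not the actual code in `taskrun.go`, and the client-go `Delete` call shown is the context-aware variant:

```go
package taskrun

import (
	"context"

	"github.com/tektoncd/pipeline/pkg/apis/pipeline/v1beta1"
	k8serrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// cleanupPodOnFailure is a hypothetical helper: delete the backing pod only
// when the TaskRun was explicitly cancelled, and keep it on a timeout (or any
// other failure) so the step logs remain viewable.
func cleanupPodOnFailure(ctx context.Context, kubeClient kubernetes.Interface, tr *v1beta1.TaskRun, reason string) error {
	const reasonCancelled = "TaskRunCancelled" // assumed reason string

	if tr.Status.PodName == "" || reason != reasonCancelled {
		// Timed-out or otherwise failed TaskRuns keep their pod so that
		// `kubectl logs` (and the dashboard) can still show the output.
		return nil
	}
	err := kubeClient.CoreV1().Pods(tr.Namespace).Delete(ctx, tr.Status.PodName, metav1.DeleteOptions{})
	if err != nil && !k8serrors.IsNotFound(err) {
		return err
	}
	return nil
}
```

The trade-off with skipping the deletion is that timed-out pods then stick around until the TaskRun itself is deleted, so some pruning or retention policy would be needed alongside it.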
If this is expected, it would help to document this behavior for TaskRuns as well as recommended best practices for preserving logs.
Steps to Reproduce the Problem
Run `kubectl get pods` to find if the pod of the TaskRun has been deleted.
Additional Info
N/A
Using what is in the latest release as of v0.15.0.