diff --git a/docs/pipelines.md b/docs/pipelines.md index c1b52c58d8a..62ce5b81c6f 100644 --- a/docs/pipelines.md +++ b/docs/pipelines.md @@ -21,7 +21,7 @@ weight: 3 - [Configuring execution results at the `Pipeline` level](#configuring-execution-results-at-the-pipeline-level) - [Configuring the `Task` execution order](#configuring-the-task-execution-order) - [Adding a description](#adding-a-description) - - [Adding `Finally` to the `Pipeline` (Preview)](#adding-finally-to-the-pipeline-preview) + - [Adding `Finally` to the `Pipeline`](#adding-finally-to-the-pipeline) - [Code examples](#code-examples) ## Overview @@ -530,10 +530,7 @@ In particular: The `description` field is an optional field and can be used to provide description of the `Pipeline`. -## Adding `Finally` to the `Pipeline` (Preview) - -_Finally type is available in the `Pipeline` but functionality is in progress. Final tasks are can be specified and -are validated but not executed yet._ +## Adding `Finally` to the `Pipeline` You can specify a list of one or more final tasks under `finally` section. Final tasks are guaranteed to be executed in parallel after all `PipelineTasks` under `tasks` have completed regardless of success or error. Final tasks are very @@ -553,10 +550,186 @@ spec: Name: cleanup ``` -_[PR #2661](https://github.com/tektoncd/pipeline/pull/2661) is implementing this new functionality by adding support to enable -final tasks along with workspaces and parameters. `PipelineRun` status is being updated to include execution status of -final tasks i.e. `PipelineRun` status is set to success or failure depending on execution of `PipelineTasks`, this status -remains same when all final tasks finishes successfully but is set to failure if any of the final tasks fail._ +### Specifying `Workspaces` in Final Tasks + +Finally tasks can specify [workspaces](workspaces.md) which `PipelineTasks` might have utilized +e.g. a mount point for credentials held in Secrets. To support that requirement, you can specify one or more +`Workspaces` in the `workspaces` field for the final tasks similar to `tasks`. + +```yaml +spec: + resources: + - name: app-git + type: git + workspaces: + - name: shared-workspace + tasks: + - name: clone-app-source + taskRef: + name: clone-app-repo-to-workspace + workspaces: + - name: shared-workspace + workspace: shared-workspace + resources: + inputs: + - name: app-git + resource: app-git + finally: + - name: cleanup-workspace + taskRef: + name: cleanup-workspace + workspaces: + - name: shared-workspace + workspace: shared-workspace +``` + +### Specifying `Parameters` in Final Tasks + +Similar to `tasks`, you can specify [`Parameters`](tasks.md#specifying-parameters) in final tasks: + +```yaml +spec: + tasks: + - name: tests + taskRef: + Name: integration-test + finally: + - name: report-results + taskRef: + Name: report-results + params: + - name: url + value: "someURL" +``` + +### `PipelineRun` Status with `finally` + +With `finally`, `PipelineRun` status is calculated based on `PipelineTasks` under `tasks` section and final tasks. 
+
+Without `finally`:
+
+| `PipelineTasks` under `tasks` | `PipelineRun` status | Reason |
+| ----------------------------- | -------------------- | ------ |
+| all `PipelineTasks` successful | `true` | `Succeeded` |
+| one or more `PipelineTasks` [skipped](conditions.md) and rest successful | `true` | `Completed` |
+| single failure of `PipelineTask` | `false` | `Failed` |
+
+With `finally`:
+
+| `PipelineTasks` under `tasks` | Final Tasks | `PipelineRun` status | Reason |
+| ----------------------------- | ----------- | -------------------- | ------ |
+| all `PipelineTasks` successful | all final tasks successful | `true` | `Succeeded` |
+| all `PipelineTasks` successful | one or more final tasks failed | `false` | `Failed` |
+| one or more `PipelineTasks` [skipped](conditions.md) and rest successful | all final tasks successful | `true` | `Completed` |
+| one or more `PipelineTasks` [skipped](conditions.md) and rest successful | one or more final tasks failed | `false` | `Failed` |
+| single failure of `PipelineTask` | all final tasks successful | `false` | `Failed` |
+| single failure of `PipelineTask` | one or more final tasks failed | `false` | `Failed` |
+
+Overall, the `PipelineRun` state transitions as follows for the respective scenarios:
+
+* All `PipelineTasks` and final tasks are successful: `Started` -> `Running` -> `Succeeded`
+* At least one `PipelineTask` skipped and the rest successful: `Started` -> `Running` -> `Completed`
+* One `PipelineTask` failed / one or more final tasks failed: `Started` -> `Running` -> `Failed`
+
+Please refer to the [table](pipelineruns.md#monitoring-execution-status) under Monitoring Execution Status to learn
+which events are triggered based on the `PipelineRun` status.
+
+### Known Limitations
+
+#### Specifying `Resources` in Final Tasks
+
+Similar to `tasks`, you can use [PipelineResources](#specifying-resources) as inputs and outputs for
+final tasks in the Pipeline. The only difference is that final tasks with an input resource cannot have a `from` clause,
+unlike a `PipelineTask` from the `tasks` section. For example:
+
+```yaml
+spec:
+  tasks:
+    - name: tests
+      taskRef:
+        Name: integration-test
+      resources:
+        inputs:
+          - name: source
+            resource: tektoncd-pipeline-repo
+        outputs:
+          - name: workspace
+            resource: my-repo
+  finally:
+    - name: clear-workspace
+      taskRef:
+        Name: clear-workspace
+      resources:
+        inputs:
+          - name: workspace
+            resource: my-repo
+            from: #invalid
+              - tests
+```
+
+#### Cannot configure the Final Task execution order
+
+It's not possible to configure or modify the execution order of the final tasks. Unlike `Tasks` in a `Pipeline`,
+all final tasks run simultaneously and start executing once all `PipelineTasks` under `tasks` have completed, which means
+no `runAfter` can be specified in final tasks.
+
+#### Cannot specify execution `Conditions` in Final Tasks
+
+`Tasks` in a `Pipeline` can be configured to run only if some conditions are satisfied using `conditions`. Final tasks,
+however, are guaranteed to be executed after all `PipelineTasks`, therefore no `conditions` can be specified in
+final tasks.
+
+#### Cannot configure `Task` execution results with `finally`
+
+Final tasks cannot be configured to consume `Results` of a `PipelineTask` from the `tasks` section, i.e. the following
+example is not supported right now, but we are working on adding support for it (tracked in issue
+[#2557](https://github.com/tektoncd/pipeline/issues/2557)).
+ +```yaml +spec: + tasks: + - name: count-comments-before + taskRef: + Name: count-comments + - name: add-comment + taskRef: + Name: add-comment + - name: count-comments-after + taskRef: + Name: count-comments + finally: + - name: check-count + taskRef: + Name: check-count + params: + - name: before-count + value: $(tasks.count-comments-before.results.count) #invalid + - name: after-count + value: $(tasks.count-comments-after.results.count) #invalid +``` + +#### Cannot configure `Pipeline` result with `finally` + +Final tasks can emit `Results` but results emitted from the final tasks can not be configured in the +[Pipeline Results](#configuring-execution-results-at-the-pipeline-level). We are working on adding support for this +(tracked in issue [#2710](https://github.com/tektoncd/pipeline/issues/2710)). + +```yaml + results: + - name: comment-count-validate + value: $(finally.check-count.results.comment-count-validate) +``` + +In this example, `PipelineResults` is set to: + +``` +"pipelineResults": [ + { + "name": "comment-count-validate", + "value": "$(finally.check-count.results.comment-count-validate)" + } +], +``` ## Code examples diff --git a/examples/v1beta1/pipelineruns/pipelinerun-with-final-tasks.yaml b/examples/v1beta1/pipelineruns/pipelinerun-with-final-tasks.yaml new file mode 100644 index 00000000000..969c6aa54a3 --- /dev/null +++ b/examples/v1beta1/pipelineruns/pipelinerun-with-final-tasks.yaml @@ -0,0 +1,187 @@ +# Copied from https://github.com/tektoncd/catalog/blob/v1beta1/git/git-clone.yaml :( +# This can be deleted after we add support to refer to the remote Task in a registry (Issue #1839) or +# add support for referencing task in git directly (issue #2298) +apiVersion: tekton.dev/v1beta1 +kind: Task +metadata: + name: git-clone-from-catalog +spec: + workspaces: + - name: output + description: The git repo will be cloned onto the volume backing this workspace + params: + - name: url + description: git url to clone + type: string + - name: revision + description: git revision to checkout (branch, tag, sha, ref…) + type: string + default: master + - name: refspec + description: (optional) git refspec to fetch before checking out revision + default: "" + - name: submodules + description: defines if the resource should initialize and fetch the submodules + type: string + default: "true" + - name: depth + description: performs a shallow clone where only the most recent commit(s) will be fetched + type: string + default: "1" + - name: sslVerify + description: defines if http.sslVerify should be set to true or false in the global git config + type: string + default: "true" + - name: subdirectory + description: subdirectory inside the "output" workspace to clone the git repo into + type: string + default: "" + - name: deleteExisting + description: clean out the contents of the repo's destination directory (if it already exists) before trying to clone the repo there + type: string + default: "false" + - name: httpProxy + description: git HTTP proxy server for non-SSL requests + type: string + default: "" + - name: httpsProxy + description: git HTTPS proxy server for SSL requests + type: string + default: "" + - name: noProxy + description: git no proxy - opt out of proxying HTTP/HTTPS requests + type: string + default: "" + results: + - name: commit + description: The precise commit SHA that was fetched by this Task + steps: + - name: clone + image: gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/git-init:v0.12.1 + script: | + 
CHECKOUT_DIR="$(workspaces.output.path)/$(params.subdirectory)" + + cleandir() { + # Delete any existing contents of the repo directory if it exists. + # + # We don't just "rm -rf $CHECKOUT_DIR" because $CHECKOUT_DIR might be "/" + # or the root of a mounted volume. + if [[ -d "$CHECKOUT_DIR" ]] ; then + # Delete non-hidden files and directories + rm -rf "$CHECKOUT_DIR"/* + # Delete files and directories starting with . but excluding .. + rm -rf "$CHECKOUT_DIR"/.[!.]* + # Delete files and directories starting with .. plus any other character + rm -rf "$CHECKOUT_DIR"/..?* + fi + } + + if [[ "$(params.deleteExisting)" == "true" ]] ; then + cleandir + fi + + test -z "$(params.httpProxy)" || export HTTP_PROXY=$(params.httpProxy) + test -z "$(params.httpsProxy)" || export HTTPS_PROXY=$(params.httpsProxy) + test -z "$(params.noProxy)" || export NO_PROXY=$(params.noProxy) + + /ko-app/git-init \ + -url "$(params.url)" \ + -revision "$(params.revision)" \ + -refspec "$(params.refspec)" \ + -path "$CHECKOUT_DIR" \ + -sslVerify="$(params.sslVerify)" \ + -submodules="$(params.submodules)" \ + -depth "$(params.depth)" + cd "$CHECKOUT_DIR" + RESULT_SHA="$(git rev-parse HEAD | tr -d '\n')" + EXIT_CODE="$?" + if [ "$EXIT_CODE" != 0 ] + then + exit $EXIT_CODE + fi + # Make sure we don't add a trailing newline to the result! + echo -n "$RESULT_SHA" > $(results.commit.path) + +--- + +# Task to cleanup shared workspace +apiVersion: tekton.dev/v1beta1 +kind: Task +metadata: + name: cleanup-workspace +spec: + workspaces: + # Shared workspace where git repo is cloned + - name: source + steps: + - name: check-application-dir-has-source + image: ubuntu + script: | + if [ ! -d "$(workspaces.source.path)/application/" ]; then + echo "Something went wrong and could not find application source under $(workspaces.source.path)/application/" + exit 1 + fi + - name: cleanup-workspace + image: ubuntu + script: | + rm -rf $(workspaces.source.path)/application/ + - name: verify-application-dir-has-gone + image: ubuntu + script: | + if [ -d "$(workspaces.source.path)/application/" ]; then + echo "Something went wrong cleaning up and the application source still exists under $(workspaces.source.path)/application/" + exit 1 + fi +--- + +# Pipeline to clone repo into shared workspace and cleanup the workspace after done +apiVersion: tekton.dev/v1beta1 +kind: Pipeline +metadata: + name: clone-cleanup-workspace +spec: + workspaces: + # common workspace where git repo is cloned and needs to be cleanup after done + - name: git-source + tasks: + # Clone app repo to workspace + - name: clone-app-repo + taskRef: + name: git-clone-from-catalog + params: + - name: url + value: https://github.com/tektoncd/community.git + - name: subdirectory + value: application + workspaces: + - name: output + workspace: git-source + finally: + # Cleanup workspace + - name: cleanup + taskRef: + name: cleanup-workspace + workspaces: + - name: source + workspace: git-source +--- + +# PipelineRun to execute pipeline - clone-into-workspace-and-cleanup-workspace +apiVersion: tekton.dev/v1beta1 +kind: PipelineRun +metadata: + name: write-and-cleanup-workspace +spec: + pipelineRef: + name: clone-cleanup-workspace + serviceAccountName: 'default' + workspaces: + - name: git-source + volumeClaimTemplate: + spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi +--- diff --git a/internal/builder/v1beta1/pipeline.go b/internal/builder/v1beta1/pipeline.go index decde6b6276..e2a1379c8da 100644 --- a/internal/builder/v1beta1/pipeline.go +++ 
b/internal/builder/v1beta1/pipeline.go @@ -153,6 +153,25 @@ func PipelineTask(name, taskName string, ops ...PipelineTaskOp) PipelineSpecOp { } } +// FinalTask adds a final PipelineTask, with specified name and task name, to the PipelineSpec. +// Any number of PipelineTask modifier can be passed to transform it. +func FinalPipelineTask(name, taskName string, ops ...PipelineTaskOp) PipelineSpecOp { + return func(ps *v1beta1.PipelineSpec) { + pTask := &v1beta1.PipelineTask{ + Name: name, + } + if taskName != "" { + pTask.TaskRef = &v1beta1.TaskRef{ + Name: taskName, + } + } + for _, op := range ops { + op(pTask) + } + ps.Finally = append(ps.Finally, *pTask) + } +} + // PipelineResult adds a PipelineResult, with specified name, value and description, to the PipelineSpec. func PipelineResult(name, value, description string, ops ...PipelineOp) PipelineSpecOp { return func(ps *v1beta1.PipelineSpec) { diff --git a/internal/builder/v1beta1/pipeline_test.go b/internal/builder/v1beta1/pipeline_test.go index 45fdc0b3ae2..c2637f247bf 100644 --- a/internal/builder/v1beta1/pipeline_test.go +++ b/internal/builder/v1beta1/pipeline_test.go @@ -393,3 +393,38 @@ func TestPipelineRunWithPipelineSpec(t *testing.T) { t.Fatalf("PipelineRun diff -want, +got: %s", diff) } } + +func TestPipelineRunWithFinalTask(t *testing.T) { + pipelineRun := tb.PipelineRun("pear", tb.PipelineRunNamespace("foo"), tb.PipelineRunSpec("", tb.PipelineRunPipelineSpec( + tb.PipelineTask("dag-task", "some-task"), + tb.FinalPipelineTask("final-task", "some-task")), + tb.PipelineRunServiceAccountName("sa"), + )) + + expectedPipelineRun := &v1beta1.PipelineRun{ + ObjectMeta: metav1.ObjectMeta{ + Name: "pear", + Namespace: "foo", + }, + Spec: v1beta1.PipelineRunSpec{ + PipelineRef: nil, + PipelineSpec: &v1beta1.PipelineSpec{ + Tasks: []v1beta1.PipelineTask{{ + Name: "dag-task", + TaskRef: &v1beta1.TaskRef{Name: "some-task"}, + }}, + Finally: []v1beta1.PipelineTask{{ + Name: "final-task", + TaskRef: &v1beta1.TaskRef{Name: "some-task"}, + }}, + }, + ServiceAccountName: "sa", + Timeout: &metav1.Duration{Duration: 1 * time.Hour}, + }, + } + + if diff := cmp.Diff(expectedPipelineRun, pipelineRun); diff != "" { + t.Fatalf("PipelineRun diff -want, +got: %s", diff) + } + +} diff --git a/pkg/reconciler/pipelinerun/pipelinerun.go b/pkg/reconciler/pipelinerun/pipelinerun.go index 8899f2bcdac..2b7e3277821 100644 --- a/pkg/reconciler/pipelinerun/pipelinerun.go +++ b/pkg/reconciler/pipelinerun/pipelinerun.go @@ -293,6 +293,19 @@ func (c *Reconciler) reconcile(ctx context.Context, pr *v1beta1.PipelineRun) err return controller.NewPermanentError(err) } + // build DAG with a list of final tasks, this DAG is used later to identify + // if a task in PipelineRunState is final task or not + // the finally section is optional and might not exist + // dfinally holds an empty Graph in the absence of finally clause + dfinally, err := dag.Build(v1beta1.PipelineTaskList(pipelineSpec.Finally)) + if err != nil { + // This Run has failed, so we need to mark it as failed and stop reconciling it + pr.Status.MarkFailed(ReasonInvalidGraph, + "PipelineRun %s's Pipeline DAG is invalid for finally clause: %s", + pr.Namespace, pr.Name, err) + return controller.NewPermanentError(err) + } + if err := pipelineSpec.Validate(ctx); err != nil { // This Run has failed, so we need to mark it as failed and stop reconciling it pr.Status.MarkFailed(ReasonFailedValidation, @@ -355,6 +368,9 @@ func (c *Reconciler) reconcile(ctx context.Context, pr *v1beta1.PipelineRun) err // Apply parameter 
substitution from the PipelineRun pipelineSpec = resources.ApplyParameters(pipelineSpec, pr) + // pipelineState holds a list of pipeline tasks after resolving conditions and pipeline resources + // pipelineState also holds a taskRun for each pipeline task after the taskRun is created + // pipelineState is instantiated and updated on every reconcile cycle pipelineState, err := resources.ResolvePipelineRun(ctx, *pr, func(name string) (v1beta1.TaskInterface, error) { @@ -369,7 +385,7 @@ func (c *Reconciler) reconcile(ctx context.Context, pr *v1beta1.PipelineRun) err func(name string) (*v1alpha1.Condition, error) { return c.conditionLister.Conditions(pr.Namespace).Get(name) }, - pipelineSpec.Tasks, providedResources, + append(pipelineSpec.Tasks, pipelineSpec.Finally...), providedResources, ) if err != nil { @@ -436,16 +452,11 @@ func (c *Reconciler) reconcile(ctx context.Context, pr *v1beta1.PipelineRun) err return controller.NewPermanentError(err) } - // When the pipeline run is stopping, we don't schedule any new task and only - // wait for all running tasks to complete and report their status - if !pipelineState.IsStopping() { - err = c.runNextSchedulableTask(ctx, pr, d, pipelineState, as) - if err != nil { - return err - } + if err := c.runNextSchedulableTask(ctx, pr, d, dfinally, pipelineState, as); err != nil { + return err } - after := resources.GetPipelineConditionStatus(pr, pipelineState, logger, d) + after := resources.GetPipelineConditionStatus(pr, pipelineState, logger, d, dfinally) switch after.Status { case corev1.ConditionTrue: pr.Status.MarkSucceeded(after.Reason, after.Message) @@ -463,18 +474,31 @@ func (c *Reconciler) reconcile(ctx context.Context, pr *v1beta1.PipelineRun) err // runNextSchedulableTask gets the next schedulable Tasks from the dag based on the current // pipeline run state, and starts them -func (c *Reconciler) runNextSchedulableTask(ctx context.Context, pr *v1beta1.PipelineRun, d *dag.Graph, pipelineState resources.PipelineRunState, as artifacts.ArtifactStorageInterface) error { +// after all DAG tasks are done, it's responsible for scheduling final tasks and start executing them +func (c *Reconciler) runNextSchedulableTask(ctx context.Context, pr *v1beta1.PipelineRun, d *dag.Graph, dfinally *dag.Graph, pipelineState resources.PipelineRunState, as artifacts.ArtifactStorageInterface) error { logger := logging.FromContext(ctx) recorder := controller.GetEventRecorder(ctx) - candidateTasks, err := dag.GetSchedulable(d, pipelineState.SuccessfulPipelineTaskNames()...) - if err != nil { - logger.Errorf("Error getting potential next tasks for valid pipelinerun %s: %v", pr.Name, err) - return controller.NewPermanentError(err) + var nextRprts []*resources.ResolvedPipelineRunTask + + // when pipeline run is stopping, do not schedule any new task and only + // wait for all running tasks to complete and report their status + if !pipelineState.IsStopping(d) { + // candidateTasks is initialized to DAG root nodes to start pipeline execution + // candidateTasks is derived based on successfully finished tasks and/or skipped tasks + candidateTasks, err := dag.GetSchedulable(d, pipelineState.SuccessfulOrSkippedDAGTasks(d)...) 
+ if err != nil { + logger.Errorf("Error getting potential next tasks for valid pipelinerun %s: %v", pr.Name, err) + return controller.NewPermanentError(err) + } + // nextRprts holds a list of pipeline tasks which should be executed next + nextRprts = pipelineState.GetNextTasks(candidateTasks) } - nextRprts := pipelineState.GetNextTasks(candidateTasks) + // GetFinalTasks only returns tasks when a DAG is complete + nextRprts = append(nextRprts, pipelineState.GetFinalTasks(d, dfinally)...) + resolvedResultRefs, err := resources.ResolveResultRefs(pipelineState, nextRprts) if err != nil { logger.Infof("Failed to resolve all task params for %q with error %v", pr.Name, err) diff --git a/pkg/reconciler/pipelinerun/pipelinerun_test.go b/pkg/reconciler/pipelinerun/pipelinerun_test.go index 64906567f99..15d6ac8ef96 100644 --- a/pkg/reconciler/pipelinerun/pipelinerun_test.go +++ b/pkg/reconciler/pipelinerun/pipelinerun_test.go @@ -36,7 +36,7 @@ import ( taskrunresources "github.com/tektoncd/pipeline/pkg/reconciler/taskrun/resources" ttesting "github.com/tektoncd/pipeline/pkg/reconciler/testing" "github.com/tektoncd/pipeline/pkg/system" - test "github.com/tektoncd/pipeline/test" + "github.com/tektoncd/pipeline/test" "github.com/tektoncd/pipeline/test/diff" "github.com/tektoncd/pipeline/test/names" "go.uber.org/zap" @@ -449,7 +449,12 @@ func TestReconcile_InvalidPipelineRuns(t *testing.T) { tb.PipelineRun("pipeline-invalid-dag-graph", tb.PipelineRunNamespace("foo"), tb.PipelineRunSpec("", tb.PipelineRunPipelineSpec( tb.PipelineTask("dag-task-1", "dag-task-1", tb.RunAfter("dag-task-1")), ))), + tb.PipelineRun("pipeline-invalid-final-graph", tb.PipelineRunNamespace("foo"), tb.PipelineRunSpec("", tb.PipelineRunPipelineSpec( + tb.PipelineTask("dag-task-1", "taskName"), + tb.FinalPipelineTask("final-task-1", "taskName"), + tb.FinalPipelineTask("final-task-1", "taskName")))), } + d := test.Data{ Tasks: ts, Pipelines: ps, @@ -528,6 +533,11 @@ func TestReconcile_InvalidPipelineRuns(t *testing.T) { pipelineRun: prs[12], reason: ReasonInvalidGraph, permanentError: true, + }, { + name: "invalid-pipeline-with-invalid-final-tasks-graph", + pipelineRun: prs[13], + reason: ReasonInvalidGraph, + permanentError: true, }, } @@ -3063,3 +3073,424 @@ func TestUpdatePipelineRunStatusFromTaskRuns(t *testing.T) { }) } } + +func TestReconcilePipeline_FinalTasks(t *testing.T) { + tests := []struct { + name string + pipelineRunName string + prs []*v1beta1.PipelineRun + ps []*v1beta1.Pipeline + ts []*v1beta1.Task + trs []*v1beta1.TaskRun + expectedTaskRuns map[string]*v1beta1.PipelineRunTaskRunStatus + pipelineRunStatusUnknown bool + pipelineRunStatusFalse bool + }{{ + // pipeline run should result in error when a dag task is executed and resulted in failure but final task is executed successfully + + // pipelineRunName - "pipeline-run-dag-task-failing" + // pipelineName - "pipeline-dag-task-failing" + // pipelineTasks - "dag-task-1" and "final-task-1" + // taskRunNames - "task-run-dag-task" and "task-run-final-task" + // taskName - "hello-world" + + name: "Test 01 - Pipeline run should result in error when a dag task fails but final task is executed successfully.", + + pipelineRunName: "pipeline-run-dag-task-failing", + + prs: getPipelineRun( + "pipeline-run-dag-task-failing", + "pipeline-dag-task-failing", + corev1.ConditionFalse, + v1beta1.PipelineRunReasonFailed.String(), + "Tasks Completed: 2 (Failed: 1, Cancelled 0), Skipped: 0", + map[string]string{ + "dag-task-1": "task-run-dag-task", + "final-task-1": 
"task-run-final-task", + }, + ), + + ps: getPipeline( + "pipeline-dag-task-failing", + []tb.PipelineSpecOp{ + tb.PipelineTask("dag-task-1", "hello-world"), + tb.FinalPipelineTask("final-task-1", "hello-world"), + }, + ), + + ts: []*v1beta1.Task{tb.Task("hello-world", tb.TaskNamespace("foo"))}, + + trs: []*v1beta1.TaskRun{ + getTaskRun( + "task-run-dag-task", + "pipeline-run-dag-task-failing", + "pipeline-dag-task-failing", + "dag-task-1", + corev1.ConditionFalse, + ), + getTaskRun( + "task-run-final-task", + "pipeline-run-dag-task-failing", + "pipeline-dag-task-failing", + "final-task-1", + "", + ), + }, + + expectedTaskRuns: map[string]*v1beta1.PipelineRunTaskRunStatus{ + "task-run-dag-task": getTaskRunStatus("dag-task-1", corev1.ConditionFalse), + "task-run-final-task": getTaskRunStatus("final-task-1", ""), + }, + + pipelineRunStatusFalse: true, + }, { + + // pipeline run should result in error when a dag task is successful but the final task fails + + // pipelineRunName - "pipeline-run-with-dag-successful-but-final-failing" + // pipelineName - "pipeline-with-dag-successful-but-final-failing" + // pipelineTasks - "dag-task-1" and "final-task-1" + // taskRunNames - "task-run-dag-task" and "task-run-final-task" + // taskName - "hello-world" + + name: "Test 02 - Pipeline run should result in error when a dag task is successful but final task fails.", + + pipelineRunName: "pipeline-run-with-dag-successful-but-final-failing", + + prs: getPipelineRun( + "pipeline-run-with-dag-successful-but-final-failing", + "pipeline-with-dag-successful-but-final-failing", + corev1.ConditionFalse, + v1beta1.PipelineRunReasonFailed.String(), + "Tasks Completed: 2 (Failed: 1, Cancelled 0), Skipped: 0", + map[string]string{ + "dag-task-1": "task-run-dag-task", + "final-task-1": "task-run-final-task", + }, + ), + + ps: getPipeline( + "pipeline-with-dag-successful-but-final-failing", + []tb.PipelineSpecOp{ + tb.PipelineTask("dag-task-1", "hello-world"), + tb.FinalPipelineTask("final-task-1", "hello-world"), + }, + ), + + ts: []*v1beta1.Task{tb.Task("hello-world", tb.TaskNamespace("foo"))}, + + trs: []*v1beta1.TaskRun{ + getTaskRun( + "task-run-dag-task", + "pipeline-run-with-dag-successful-but-final-failing", + "pipeline-with-dag-successful-but-final-failing", + "dag-task-1", + "", + ), + getTaskRun( + "task-run-final-task", + "pipeline-run-with-dag-successful-but-final-failing", + "pipeline-with-dag-successful-but-final-failing", + "final-task-1", + corev1.ConditionFalse, + ), + }, + + expectedTaskRuns: map[string]*v1beta1.PipelineRunTaskRunStatus{ + "task-run-dag-task": getTaskRunStatus("dag-task-1", ""), + "task-run-final-task": getTaskRunStatus("final-task-1", corev1.ConditionFalse), + }, + + pipelineRunStatusFalse: true, + }, { + + // pipeline run should result in error when a dag task and final task both are executed and resulted in failure + + // pipelineRunName - "pipeline-run-with-dag-and-final-failing" + // pipelineName - "pipeline-with-dag-and-final-failing" + // pipelineTasks - "dag-task-1" and "final-task-1" + // taskRunNames - "task-run-dag-task" and "task-run-final-task" + // taskName - "hello-world" + + name: "Test 03 - Pipeline run should result in error when both dag task and final task fail.", + + pipelineRunName: "pipeline-run-with-dag-and-final-failing", + + prs: getPipelineRun( + "pipeline-run-with-dag-and-final-failing", + "pipeline-with-dag-and-final-failing", + corev1.ConditionFalse, + v1beta1.PipelineRunReasonFailed.String(), + "Tasks Completed: 2 (Failed: 2, Cancelled 0), Skipped: 0", + 
map[string]string{ + "dag-task-1": "task-run-dag-task", + "final-task-1": "task-run-final-task", + }, + ), + + ps: getPipeline( + "pipeline-with-dag-and-final-failing", + []tb.PipelineSpecOp{ + tb.PipelineTask("dag-task-1", "hello-world"), + tb.FinalPipelineTask("final-task-1", "hello-world"), + }, + ), + + ts: []*v1beta1.Task{tb.Task("hello-world", tb.TaskNamespace("foo"))}, + + trs: []*v1beta1.TaskRun{ + getTaskRun( + "task-run-dag-task", + "pipeline-run-with-dag-and-final-failing", + "pipeline-with-dag-and-final-failing", + "dag-task-1", + corev1.ConditionFalse, + ), + getTaskRun( + "task-run-final-task", + "pipeline-run-with-dag-and-final-failing", + "pipeline-with-dag-and-final-failing", + "final-task-1", + corev1.ConditionFalse, + ), + }, + + expectedTaskRuns: map[string]*v1beta1.PipelineRunTaskRunStatus{ + "task-run-dag-task": getTaskRunStatus("dag-task-1", corev1.ConditionFalse), + "task-run-final-task": getTaskRunStatus("final-task-1", corev1.ConditionFalse), + }, + + pipelineRunStatusFalse: true, + }, { + + // pipeline run should not schedule final tasks until dag tasks are done i.e. + // dag task 1 fails but dag task 2 is still running, pipeline run should not schedule and create task run for final task + + // pipelineRunName - "pipeline-run-with-dag-running" + // pipelineName - "pipeline-with-dag-running" + // pipelineTasks - "dag-task-1", "dag-task-2" and "final-task-1" + // taskRunNames - "task-run-dag-task-1" and "task-run-dag-task-2" - no task run for final task + // taskName - "hello-world" + + name: "Test 04 - Pipeline run should not schedule final tasks while dag tasks are still running.", + + pipelineRunName: "pipeline-run-with-dag-running", + + prs: getPipelineRun( + "pipeline-run-with-dag-running", + "pipeline-with-dag-running", + corev1.ConditionUnknown, + v1beta1.PipelineRunReasonRunning.String(), + "Tasks Completed: 1 (Failed: 1, Cancelled 0), Incomplete: 2, Skipped: 0", + map[string]string{ + "dag-task-1": "task-run-dag-task-1", + "dag-task-2": "task-run-dag-task-2", + }, + ), + + ps: getPipeline( + "pipeline-with-dag-running", + []tb.PipelineSpecOp{ + tb.PipelineTask("dag-task-1", "hello-world"), + tb.PipelineTask("dag-task-2", "hello-world"), + tb.FinalPipelineTask("final-task-1", "hello-world"), + }, + ), + + ts: []*v1beta1.Task{tb.Task("hello-world", tb.TaskNamespace("foo"))}, + + trs: []*v1beta1.TaskRun{ + getTaskRun( + "task-run-dag-task-1", + "pipeline-run-with-dag-running", + "pipeline-with-dag-running", + "dag-task-1", + corev1.ConditionFalse, + ), + getTaskRun( + "task-run-dag-task-2", + "pipeline-run-with-dag-running", + "pipeline-with-dag-running", + "dag-task-2", + corev1.ConditionUnknown, + ), + }, + + expectedTaskRuns: map[string]*v1beta1.PipelineRunTaskRunStatus{ + "task-run-dag-task-1": getTaskRunStatus("dag-task-1", corev1.ConditionFalse), + "task-run-dag-task-2": getTaskRunStatus("dag-task-2", corev1.ConditionUnknown), + }, + + pipelineRunStatusUnknown: true, + }, { + + // pipeline run should not schedule final tasks until dag tasks are done i.e. 
+ // dag task is still running and no other dag task available to schedule, + // pipeline run should not schedule and create task run for final task + + // pipelineRunName - "pipeline-run-dag-task-running" + // pipelineName - "pipeline-dag-task-running" + // pipelineTasks - "dag-task-1" and "final-task-1" + // taskRunNames - "task-run-dag-task-1" - no task run for final task + // taskName - "hello-world" + + name: "Test 05 - Pipeline run should not schedule final tasks while dag tasks are still running and no other dag task available to schedule.", + + pipelineRunName: "pipeline-run-dag-task-running", + + prs: getPipelineRun( + "pipeline-run-dag-task-running", + "pipeline-dag-task-running", + corev1.ConditionUnknown, + v1beta1.PipelineRunReasonRunning.String(), + "Tasks Completed: 0 (Failed: 0, Cancelled 0), Incomplete: 1, Skipped: 0", + map[string]string{ + "dag-task-1": "task-run-dag-task-1", + }, + ), + + ps: getPipeline( + "pipeline-dag-task-running", + []tb.PipelineSpecOp{ + tb.PipelineTask("dag-task-1", "hello-world"), + tb.FinalPipelineTask("final-task-1", "hello-world"), + }, + ), + + ts: []*v1beta1.Task{tb.Task("hello-world", tb.TaskNamespace("foo"))}, + + trs: []*v1beta1.TaskRun{ + getTaskRun( + "task-run-dag-task-1", + "pipeline-run-dag-task-running", + "pipeline-dag-task-running", + "dag-task-1", + corev1.ConditionUnknown, + ), + }, + + expectedTaskRuns: map[string]*v1beta1.PipelineRunTaskRunStatus{ + "task-run-dag-task-1": getTaskRunStatus("dag-task-1", corev1.ConditionUnknown), + }, + + pipelineRunStatusUnknown: true, + }} + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + d := test.Data{ + PipelineRuns: tt.prs, + Pipelines: tt.ps, + Tasks: tt.ts, + TaskRuns: tt.trs, + } + + testAssets, cancel := getPipelineRunController(t, d) + defer cancel() + c := testAssets.Controller + clients := testAssets.Clients + + if err := c.Reconciler.Reconcile(context.Background(), "foo/"+tt.pipelineRunName); err != nil { + t.Fatalf("Error reconciling for %s: %s", tt.name, err) + } + + actual := clients.Pipeline.Actions()[1].(ktesting.UpdateAction).GetObject().(*v1beta1.PipelineRun) + if actual == nil { + t.Errorf("Expected a PipelineRun to be updated, but it wasn't for %s", tt.name) + } + + actions := clients.Pipeline.Actions() + for _, action := range actions { + if action != nil { + resource := action.GetResource().Resource + if resource == "taskruns" { + t.Fatalf("Expected client to not have created a TaskRun for the PipelineRun, but it did for %s", tt.name) + } + } + } + + // Check that the PipelineRun was reconciled correctly + reconciledRun, err := clients.Pipeline.TektonV1beta1().PipelineRuns("foo").Get(tt.pipelineRunName, metav1.GetOptions{}) + if err != nil { + t.Fatalf("Somehow had error getting completed reconciled run out of fake client for %s: %s", tt.name, err) + } + + if tt.pipelineRunStatusFalse { + // This PipelineRun should still be failed and the status should reflect that + if !reconciledRun.Status.GetCondition(apis.ConditionSucceeded).IsFalse() { + t.Errorf("Expected PipelineRun status to be failed, but was %v for %s", + reconciledRun.Status.GetCondition(apis.ConditionSucceeded), tt.name) + } + } else if tt.pipelineRunStatusUnknown { + // This PipelineRun should still be running and the status should reflect that + if !reconciledRun.Status.GetCondition(apis.ConditionSucceeded).IsUnknown() { + t.Errorf("Expected PipelineRun status to be unknown (running), but was %v for %s", + reconciledRun.Status.GetCondition(apis.ConditionSucceeded), tt.name) + } + } + + 
if d := cmp.Diff(reconciledRun.Status.TaskRuns, tt.expectedTaskRuns); d != "" { + t.Fatalf("Expected PipelineRunTaskRun status to match TaskRun(s) status, but got a mismatch for %s: %s", tt.name, d) + } + + }) + } +} + +func getPipelineRun(pr, p string, status corev1.ConditionStatus, reason string, m string, tr map[string]string) []*v1beta1.PipelineRun { + var op []tb.PipelineRunStatusOp + for k, v := range tr { + op = append(op, tb.PipelineRunTaskRunsStatus(v, + &v1beta1.PipelineRunTaskRunStatus{PipelineTaskName: k, Status: &v1beta1.TaskRunStatus{}}), + ) + } + op = append(op, tb.PipelineRunStatusCondition(apis.Condition{ + Type: apis.ConditionSucceeded, + Status: status, + Reason: reason, + Message: m, + })) + prs := []*v1beta1.PipelineRun{ + tb.PipelineRun(pr, + tb.PipelineRunNamespace("foo"), + tb.PipelineRunSpec(p, tb.PipelineRunServiceAccountName("test-sa")), + tb.PipelineRunStatus(op...), + ), + } + return prs +} + +func getPipeline(p string, t []tb.PipelineSpecOp) []*v1beta1.Pipeline { + ps := []*v1beta1.Pipeline{tb.Pipeline(p, tb.PipelineNamespace("foo"), tb.PipelineSpec(t...))} + return ps +} + +func getTaskRun(tr, pr, p, t string, status corev1.ConditionStatus) *v1beta1.TaskRun { + return tb.TaskRun(tr, + tb.TaskRunNamespace("foo"), + tb.TaskRunOwnerReference("pipelineRun", pr), + tb.TaskRunLabel(pipeline.GroupName+pipeline.PipelineLabelKey, p), + tb.TaskRunLabel(pipeline.GroupName+pipeline.PipelineRunLabelKey, pr), + tb.TaskRunLabel(pipeline.GroupName+pipeline.PipelineTaskLabelKey, t), + tb.TaskRunSpec(tb.TaskRunTaskRef(t)), + tb.TaskRunStatus( + tb.StatusCondition(apis.Condition{ + Type: apis.ConditionSucceeded, + Status: status, + }), + ), + ) +} + +func getTaskRunStatus(t string, status corev1.ConditionStatus) *v1beta1.PipelineRunTaskRunStatus { + return &v1beta1.PipelineRunTaskRunStatus{ + PipelineTaskName: t, + Status: &v1beta1.TaskRunStatus{ + Status: duckv1beta1.Status{ + Conditions: []apis.Condition{ + {Type: apis.ConditionSucceeded, Status: status}, + }, + }, + }, + } +} diff --git a/pkg/reconciler/pipelinerun/resources/pipelinerunresolution.go b/pkg/reconciler/pipelinerun/resources/pipelinerunresolution.go index 14a0dad982f..575c9fc3f52 100644 --- a/pkg/reconciler/pipelinerun/resources/pipelinerunresolution.go +++ b/pkg/reconciler/pipelinerun/resources/pipelinerunresolution.go @@ -150,6 +150,8 @@ func (state PipelineRunState) ToMap() map[string]*ResolvedPipelineRunTask { return m } +// IsDone returns true when all pipeline tasks have respective taskRun created and +// that taskRun has either succeeded or failed after all possible retry attempts func (state PipelineRunState) IsDone() (isDone bool) { isDone = true for _, t := range state { @@ -175,21 +177,24 @@ func (state PipelineRunState) IsBeforeFirstTaskRun() bool { } // IsStopping returns true if the PipelineRun won't be scheduling any new Task because -// at least one task already failed or was cancelled -func (state PipelineRunState) IsStopping() bool { +// at least one task already failed or was cancelled in the specified dag +func (state PipelineRunState) IsStopping(d *dag.Graph) bool { for _, t := range state { - if t.IsCancelled() { - return true - } - if t.IsFailure() { - return true + if _, ok := d.Nodes[t.PipelineTask.Name]; ok { + if t.IsCancelled() { + return true + } + if t.IsFailure() { + return true + } } } return false } -// GetNextTasks will return the next ResolvedPipelineRunTasks to execute, which are the ones in the -// list of candidateTasks which aren't yet indicated in state to be running. 
+// GetNextTasks returns a list of tasks which should be executed next i.e. +// a list of tasks from candidateTasks which aren't yet indicated in state to be running and +// a list of cancelled/failed tasks from candidateTasks which haven't exhausted their retries func (state PipelineRunState) GetNextTasks(candidateTasks map[string]struct{}) []*ResolvedPipelineRunTask { tasks := []*ResolvedPipelineRunTask{} for _, t := range state { @@ -210,19 +215,81 @@ func (state PipelineRunState) GetNextTasks(candidateTasks map[string]struct{}) [ return tasks } -// SuccessfulPipelineTaskNames returns a list of the names of all of the PipelineTasks in state -// which have successfully completed. -func (state PipelineRunState) SuccessfulPipelineTaskNames() []string { - done := []string{} +// SuccessfulOrSkippedDAGTasks returns a list of the names of all of the PipelineTasks in state +// which have successfully completed or skipped +func (state PipelineRunState) SuccessfulOrSkippedDAGTasks(d *dag.Graph) []string { + tasks := []string{} + stateMap := state.ToMap() for _, t := range state { - if t.TaskRun != nil { - c := t.TaskRun.Status.GetCondition(apis.ConditionSucceeded) - if c.IsTrue() { - done = append(done, t.PipelineTask.Name) + if _, ok := d.Nodes[t.PipelineTask.Name]; ok { + if t.IsSuccessful() || isSkipped(t, stateMap, d) { + tasks = append(tasks, t.PipelineTask.Name) } } } - return done + return tasks +} + +// isDAGTasksStopped returns true if any of the pipeline task has failed +// and none of the pipeline task are still running +func (state PipelineRunState) isDAGTasksStopped(d *dag.Graph) bool { + failed := false + for _, t := range state { + if t.IsFailure() { + failed = true + continue + } + if t.IsStarted() && !t.IsDone() { + failed = false + break + } + } + return failed +} + +// checkTasksDone returns true if all tasks from the specified graph are finished executing +// a task is considered done if it has failed/succeeded/skipped +func (state PipelineRunState) checkTasksDone(d *dag.Graph) (isDone bool) { + isDone = true + stateMap := state.ToMap() + for _, t := range state { + if _, ok := d.Nodes[t.PipelineTask.Name]; ok { + if t.TaskRun == nil { + // this task might have skipped if taskRun is nil + // continue and ignore if this task was skipped + // skipped task is considered part of done + if isSkipped(t, stateMap, d) { + continue + } + return false + } + isDone = isDone && t.IsDone() + if !isDone { + return + } + } + } + return +} + +// GetFinalTasks returns a list of final tasks without any taskRun associated with it +// GetFinalTasks returns final tasks only when all DAG tasks have finished executing successfully or skipped or +// any one DAG task resulted in failure +func (state PipelineRunState) GetFinalTasks(d *dag.Graph, dfinally *dag.Graph) []*ResolvedPipelineRunTask { + tasks := []*ResolvedPipelineRunTask{} + finalCandidates := map[string]struct{}{} + // check either pipeline has finished executing all DAG pipelineTasks + // or any one of the DAG pipelineTask has failed + if state.isDAGTasksStopped(d) || state.checkTasksDone(d) { + // return list of tasks with all final tasks + for _, t := range state { + if _, ok := dfinally.Nodes[t.PipelineTask.Name]; ok && !t.IsSuccessful() { + finalCandidates[t.PipelineTask.Name] = struct{}{} + } + } + tasks = state.GetNextTasks(finalCandidates) + } + return tasks } // GetTaskRun is a function that will retrieve the TaskRun name. 
@@ -414,7 +481,7 @@ func GetTaskRunName(taskRunsStatus map[string]*v1beta1.PipelineRunTaskRunStatus, // GetPipelineConditionStatus will return the Condition that the PipelineRun prName should be // updated with, based on the status of the TaskRuns in state. -func GetPipelineConditionStatus(pr *v1beta1.PipelineRun, state PipelineRunState, logger *zap.SugaredLogger, dag *dag.Graph) *apis.Condition { +func GetPipelineConditionStatus(pr *v1beta1.PipelineRun, state PipelineRunState, logger *zap.SugaredLogger, dag *dag.Graph, dfinally *dag.Graph) *apis.Condition { // We have 4 different states here: // 1. Timed out -> Failed // 2. All tasks are done and at least one has failed or has been cancelled -> Failed @@ -436,7 +503,7 @@ func GetPipelineConditionStatus(pr *v1beta1.PipelineRun, state PipelineRunState, cancelledTasks := int(0) reason := v1beta1.PipelineRunReasonSuccessful.String() stateAsMap := state.ToMap() - isStopping := state.IsStopping() + isStopping := state.IsStopping(dag) // Check to see if all tasks are success or skipped // @@ -457,8 +524,11 @@ func GetPipelineConditionStatus(pr *v1beta1.PipelineRun, state PipelineRunState, // If the pipeline is in stopping mode, all tasks that are not running // already will be skipped. Otherwise these tasks end up in the // incomplete count. - skipTasks++ - withStatusTasks = append(withStatusTasks, rprt.PipelineTask.Name) + // this should never be the case for final task + if _, ok := dag.Nodes[rprt.PipelineTask.Name]; ok { + skipTasks++ + withStatusTasks = append(withStatusTasks, rprt.PipelineTask.Name) + } case rprt.IsSuccessful(): withStatusTasks = append(withStatusTasks, rprt.PipelineTask.Name) case isSkipped(rprt, stateAsMap, dag): @@ -496,9 +566,11 @@ func GetPipelineConditionStatus(pr *v1beta1.PipelineRun, state PipelineRunState, } } - // Hasn't timed out; not all tasks have finished.... - // Must keep running then.... - if failedTasks > 0 || cancelledTasks > 0 { + // Hasn't timed out; not all tasks have finished.... Must keep running then.... 
+ // transition pipeline into stopping state when one of the tasks(dag/final) cancelled or one of the dag tasks failed + // for a pipeline with final tasks, single dag task failure does not transition to interim stopping state + // pipeline stays in running state until all final tasks are done before transitioning to failed state + if cancelledTasks > 0 || (failedTasks > 0 && state.checkTasksDone(dfinally)) { reason = v1beta1.PipelineRunReasonStopping.String() } else { reason = v1beta1.PipelineRunReasonRunning.String() @@ -531,11 +603,13 @@ func isSkipped(rprt *ResolvedPipelineRunTask, stateMap map[string]*ResolvedPipel // Recursively look at parent tasks to see if they have been skipped, // if any of the parents have been skipped, skip as well - node := d.Nodes[rprt.PipelineTask.Name] - for _, p := range node.Prev { - skip := isSkipped(stateMap[p.Task.HashKey()], stateMap, d) - if skip { - return true + // continue if the task does not belong to the specified Graph + if node, ok := d.Nodes[rprt.PipelineTask.Name]; ok { + for _, p := range node.Prev { + skip := isSkipped(stateMap[p.Task.HashKey()], stateMap, d) + if skip { + return true + } } } return false diff --git a/pkg/reconciler/pipelinerun/resources/pipelinerunresolution_test.go b/pkg/reconciler/pipelinerun/resources/pipelinerunresolution_test.go index f93c3b668e7..34089f45786 100644 --- a/pkg/reconciler/pipelinerun/resources/pipelinerunresolution_test.go +++ b/pkg/reconciler/pipelinerun/resources/pipelinerunresolution_test.go @@ -161,7 +161,7 @@ func makeFailed(tr v1beta1.TaskRun) *v1beta1.TaskRun { } func withCancelled(tr *v1beta1.TaskRun) *v1beta1.TaskRun { - tr.Status.Conditions[0].Reason = "TaskRunCancelled" + tr.Status.Conditions[0].Reason = v1beta1.TaskRunSpecStatusCancelled return tr } @@ -1177,7 +1177,7 @@ func TestIsSkipped(t *testing.T) { } } -func TestSuccessfulPipelineTaskNames(t *testing.T) { +func TestPipelineRunState_SuccessfulOrSkippedDAGTasks(t *testing.T) { tcs := []struct { name string state PipelineRunState @@ -1193,7 +1193,7 @@ func TestSuccessfulPipelineTaskNames(t *testing.T) { }, { name: "one-task-finished", state: oneFinishedState, - expectedNames: []string{"mytask1"}, + expectedNames: []string{pts[0].Name}, }, { name: "one-task-failed", state: oneFailedState, @@ -1201,11 +1201,37 @@ func TestSuccessfulPipelineTaskNames(t *testing.T) { }, { name: "all-finished", state: allFinishedState, - expectedNames: []string{"mytask1", "mytask2"}, + expectedNames: []string{pts[0].Name, pts[1].Name}, + }, { + name: "conditional task not skipped as the condition execution was successful", + state: conditionCheckSuccessNoTaskStartedState, + expectedNames: []string{}, + }, { + name: "conditional task not skipped as the condition has not started executing yet", + state: conditionCheckStartedState, + expectedNames: []string{}, + }, { + name: "conditional task skipped as the condition execution resulted in failure", + state: conditionCheckFailedWithNoOtherTasksState, + expectedNames: []string{pts[5].Name}, + }, { + name: "conditional task skipped as the condition execution resulted in failure but the other pipeline task" + + "not skipped since it finished execution successfully", + state: conditionCheckFailedWithOthersPassedState, + expectedNames: []string{pts[5].Name, pts[0].Name}, + }, { + name: "conditional task skipped as the condition execution resulted in failure but the other pipeline task" + + "not skipped since it failed", + state: conditionCheckFailedWithOthersFailedState, + expectedNames: []string{pts[5].Name}, }} 
for _, tc := range tcs { t.Run(tc.name, func(t *testing.T) { - names := tc.state.SuccessfulPipelineTaskNames() + dag, err := DagFromState(tc.state) + if err != nil { + t.Fatalf("Unexpected error while buildig DAG for state %v: %v", tc.state, err) + } + names := tc.state.SuccessfulOrSkippedDAGTasks(dag) if d := cmp.Diff(names, tc.expectedNames); d != "" { t.Errorf("Expected to get completed names %v but got something different %s", tc.expectedNames, diff.PrintWantGot(d)) } @@ -1237,6 +1263,20 @@ func TestGetPipelineConditionStatus(t *testing.T) { PipelineTask: &pts[4], TaskRunName: "pipelinerun-mytask1", TaskRun: withCancelled(makeFailed(trs[0])), + }} + + var cancelledTask = PipelineRunState{{ + PipelineTask: &pts[3], // 1 retry needed + TaskRunName: "pipelinerun-mytask1", + TaskRun: &v1beta1.TaskRun{ + Status: v1beta1.TaskRunStatus{ + Status: duckv1beta1.Status{Conditions: []apis.Condition{{ + Type: apis.ConditionSucceeded, + Status: corev1.ConditionFalse, + Reason: v1beta1.TaskRunSpecStatusCancelled, + }}}, + }, + }, ResolvedTaskResources: &resources.ResolvedTaskResources{ TaskSpec: &task.Spec, }, @@ -1259,18 +1299,6 @@ func TestGetPipelineConditionStatus(t *testing.T) { TaskRunName: "failedTaskRun", // this failed PipelineTask: &pts[0], TaskRun: makeFailed(trs[0]), - }, { - TaskRunName: "skippedTaskRun", // skipped because parent failed - PipelineTask: &pts[7], // mytask8 runAfter mytask1 (failed), mytask6 (succeeded) - TaskRun: nil, - }, { - TaskRunName: "anothertaskrun", // this failed - PipelineTask: &pts[1], - TaskRun: makeFailed(trs[1]), - }, { - TaskRunName: "taskrun", // this was cancelled - PipelineTask: &pts[4], - TaskRun: withCancelled(makeFailed(trs[0])), }} var taskNotRunningWithSuccesfulParentsOneFailed = PipelineRunState{{ @@ -1416,10 +1444,10 @@ func TestGetPipelineConditionStatus(t *testing.T) { expectedReason: v1beta1.PipelineRunReasonStopping.String(), expectedStatus: corev1.ConditionUnknown, expectedSucceeded: 1, - expectedFailed: 2, + expectedFailed: 1, expectedIncomplete: 1, - expectedCancelled: 1, - expectedSkipped: 1, + expectedCancelled: 0, + expectedSkipped: 0, }, { name: "task not started with passed parent; one failed", state: taskNotRunningWithSuccesfulParentsOneFailed, @@ -1428,15 +1456,139 @@ func TestGetPipelineConditionStatus(t *testing.T) { expectedSucceeded: 1, expectedFailed: 1, expectedSkipped: 1, + }, { + name: "task with grand parents; one not run yet", + state: taskWithGrandParentsOneNotRunState, + expectedStatus: corev1.ConditionUnknown, + expectedReason: v1beta1.PipelineRunReasonRunning.String(), + expectedSucceeded: 1, + expectedIncomplete: 3, + }, { + name: "cancelled task should result in cancelled pipeline", + state: cancelledTask, + expectedStatus: corev1.ConditionFalse, + expectedReason: v1beta1.PipelineRunReasonCancelled.String(), + expectedCancelled: 1, }} for _, tc := range tcs { t.Run(tc.name, func(t *testing.T) { pr := tb.PipelineRun("somepipelinerun") - dag, err := DagFromState(tc.state) + d, err := DagFromState(tc.state) if err != nil { t.Fatalf("Unexpected error while buildig DAG for state %v: %v", tc.state, err) } - c := GetPipelineConditionStatus(pr, tc.state, zap.NewNop().Sugar(), dag) + c := GetPipelineConditionStatus(pr, tc.state, zap.NewNop().Sugar(), d, &dag.Graph{}) + wantCondition := &apis.Condition{ + Type: apis.ConditionSucceeded, + Status: tc.expectedStatus, + Reason: tc.expectedReason, + Message: getExpectedMessage(tc.expectedStatus, tc.expectedSucceeded, + tc.expectedIncomplete, tc.expectedSkipped, tc.expectedFailed, 
tc.expectedCancelled), + } + if d := cmp.Diff(wantCondition, c); d != "" { + t.Fatalf("Mismatch in condition %s", diff.PrintWantGot(d)) + } + }) + } +} + +func TestGetPipelineConditionStatus_WithFinalTasks(t *testing.T) { + + // pipeline state with one DAG successful, one final task failed + dagSucceededFinalFailed := PipelineRunState{{ + TaskRunName: "task0taskrun", + PipelineTask: &pts[0], + TaskRun: makeSucceeded(trs[0]), + }, { + TaskRunName: "failedTaskRun", + PipelineTask: &pts[1], + TaskRun: makeFailed(trs[0]), + }} + + // pipeline state with one DAG failed, no final started + dagFailedFinalNotStarted := PipelineRunState{{ + TaskRunName: "task0taskrun", + PipelineTask: &pts[0], + TaskRun: makeFailed(trs[0]), + }, { + TaskRunName: "notRunningTaskRun", + PipelineTask: &pts[1], + TaskRun: nil, + }} + + // pipeline state with one DAG failed, one final task failed + dagFailedFinalFailed := PipelineRunState{{ + TaskRunName: "task0taskrun", + PipelineTask: &pts[0], + TaskRun: makeFailed(trs[0]), + }, { + TaskRunName: "failedTaskRun", + PipelineTask: &pts[1], + TaskRun: makeFailed(trs[0]), + }} + + tcs := []struct { + name string + state PipelineRunState + dagTasks []v1beta1.PipelineTask + finalTasks []v1beta1.PipelineTask + expectedStatus corev1.ConditionStatus + expectedReason string + expectedSucceeded int + expectedIncomplete int + expectedSkipped int + expectedFailed int + expectedCancelled int + }{{ + name: "pipeline with one successful DAG task and failed final task", + state: dagSucceededFinalFailed, + dagTasks: []v1beta1.PipelineTask{pts[0]}, + finalTasks: []v1beta1.PipelineTask{pts[1]}, + expectedStatus: corev1.ConditionFalse, + expectedReason: v1beta1.PipelineRunReasonFailed.String(), + expectedSucceeded: 1, + expectedIncomplete: 0, + expectedSkipped: 0, + expectedFailed: 1, + expectedCancelled: 0, + }, { + name: "pipeline with one failed DAG task and not started final task", + state: dagFailedFinalNotStarted, + dagTasks: []v1beta1.PipelineTask{pts[0]}, + finalTasks: []v1beta1.PipelineTask{pts[1]}, + expectedStatus: corev1.ConditionUnknown, + expectedReason: v1beta1.PipelineRunReasonRunning.String(), + expectedSucceeded: 0, + expectedIncomplete: 1, + expectedSkipped: 0, + expectedFailed: 1, + expectedCancelled: 0, + }, { + name: "pipeline with one failed DAG task and failed final task", + state: dagFailedFinalFailed, + dagTasks: []v1beta1.PipelineTask{pts[0]}, + finalTasks: []v1beta1.PipelineTask{pts[1]}, + expectedStatus: corev1.ConditionFalse, + expectedReason: v1beta1.PipelineRunReasonFailed.String(), + expectedSucceeded: 0, + expectedIncomplete: 0, + expectedSkipped: 0, + expectedFailed: 2, + expectedCancelled: 0, + }} + + for _, tc := range tcs { + t.Run(tc.name, func(t *testing.T) { + pr := tb.PipelineRun("pipelinerun-final-tasks") + d, err := dag.Build(v1beta1.PipelineTaskList(tc.dagTasks)) + if err != nil { + t.Fatalf("Unexpected error while buildig graph for DAG tasks %v: %v", tc.dagTasks, err) + } + df, err := dag.Build(v1beta1.PipelineTaskList(tc.finalTasks)) + if err != nil { + t.Fatalf("Unexpected error while buildig graph for final tasks %v: %v", tc.finalTasks, err) + } + c := GetPipelineConditionStatus(pr, tc.state, zap.NewNop().Sugar(), d, df) wantCondition := &apis.Condition{ Type: apis.ConditionSucceeded, Status: tc.expectedStatus, @@ -1468,7 +1620,7 @@ func TestGetPipelineConditionStatus_PipelineTimeouts(t *testing.T) { }, }, } - c := GetPipelineConditionStatus(pr, oneFinishedState, zap.NewNop().Sugar(), d) + c := GetPipelineConditionStatus(pr, 
oneFinishedState, zap.NewNop().Sugar(), d, &dag.Graph{}) if c.Status != corev1.ConditionFalse && c.Reason != v1beta1.PipelineRunReasonTimedOut.String() { t.Fatalf("Expected to get status %s but got %s for state %v", corev1.ConditionFalse, c.Status, oneFinishedState) } @@ -2352,3 +2504,168 @@ func TestIsBeforeFirstTaskRun_WithStartedTask(t *testing.T) { t.Fatalf("Expected state to be after first taskrun") } } + +func TestPipelineRunState_GetFinalTasks(t *testing.T) { + tcs := []struct { + name string + desc string + state PipelineRunState + DAGTasks []v1beta1.PipelineTask + finalTasks []v1beta1.PipelineTask + expectedFinalTasks []*ResolvedPipelineRunTask + }{{ + // tasks: [ mytask1, mytask2] + // none finally + name: "01 - DAG tasks done, no final tasks", + desc: "DAG tasks (mytask1 and mytask2) finished successfully -" + + " do not schedule final tasks since pipeline didnt have any", + state: oneStartedState, + DAGTasks: []v1beta1.PipelineTask{pts[0], pts[1]}, + finalTasks: []v1beta1.PipelineTask{}, + expectedFinalTasks: []*ResolvedPipelineRunTask{}, + }, { + // tasks: [ mytask1] + // finally: [mytask2] + name: "02 - DAG task not started, no final tasks", + desc: "DAG tasks (mytask1) not started yet - do not schedule final tasks (mytask2)", + state: noneStartedState, + DAGTasks: []v1beta1.PipelineTask{pts[0]}, + finalTasks: []v1beta1.PipelineTask{pts[1]}, + expectedFinalTasks: []*ResolvedPipelineRunTask{}, + }, { + // tasks: [ mytask1] + // finally: [mytask2] + name: "03 - DAG task not finished, no final tasks", + desc: "DAG tasks (mytask1) started but not finished - do not schedule final tasks (mytask2)", + state: oneStartedState, + DAGTasks: []v1beta1.PipelineTask{pts[0]}, + finalTasks: []v1beta1.PipelineTask{pts[1]}, + expectedFinalTasks: []*ResolvedPipelineRunTask{}, + }, { + // tasks: [ mytask1] + // finally: [mytask2] + name: "04 - DAG task done, return final tasks", + desc: "DAG tasks (mytask1) done - schedule final tasks (mytask2)", + state: oneFinishedState, + DAGTasks: []v1beta1.PipelineTask{pts[0]}, + finalTasks: []v1beta1.PipelineTask{pts[1]}, + expectedFinalTasks: []*ResolvedPipelineRunTask{oneFinishedState[1]}, + }, { + // tasks: [ mytask1] + // finally: [mytask2] + name: "05 - DAG task failed, return final tasks", + desc: "DAG task (mytask1) failed - schedule final tasks (mytask2)", + state: oneFailedState, + DAGTasks: []v1beta1.PipelineTask{pts[0]}, + finalTasks: []v1beta1.PipelineTask{pts[1]}, + expectedFinalTasks: []*ResolvedPipelineRunTask{oneFinishedState[1]}, + }, { + // tasks: [ mytask6 with condition] + // finally: [mytask2] + name: "06 - DAG task condition started, no final tasks", + desc: "DAG task (mytask6) condition started - do not schedule final tasks (mytask1)", + state: append(conditionCheckStartedState, noneStartedState[0]), + DAGTasks: []v1beta1.PipelineTask{pts[5]}, + finalTasks: []v1beta1.PipelineTask{pts[0]}, + expectedFinalTasks: []*ResolvedPipelineRunTask{}, + }, { + // tasks: [ mytask6 with condition] + // finally: [mytask2] + name: "07 - DAG task condition done, no final tasks", + desc: "DAG task (mytask6) condition finished, mytask6 not started - do not schedule final tasks (mytask2)", + state: append(conditionCheckSuccessNoTaskStartedState, noneStartedState[0]), + DAGTasks: []v1beta1.PipelineTask{pts[5]}, + finalTasks: []v1beta1.PipelineTask{pts[0]}, + expectedFinalTasks: []*ResolvedPipelineRunTask{}, + }, { + // tasks: [ mytask6 with condition] + // finally: [mytask2] + name: "08 - DAG task skipped, return final tasks", + desc: "DAG task (mytask6) 
+		// tasks: [ mytask1, mytask6 with condition]
+		// finally: [mytask2]
+		name: "09 - DAG task succeeded/skipped, return final tasks",
+		desc: "DAG task (mytask1) finished, mytask6 condition failed - schedule final tasks (mytask2)",
+		state: append(conditionCheckFailedWithOthersPassedState, noneStartedState[1]),
+		DAGTasks: []v1beta1.PipelineTask{pts[5], pts[0]},
+		finalTasks: []v1beta1.PipelineTask{pts[1]},
+		expectedFinalTasks: []*ResolvedPipelineRunTask{noneStartedState[1]},
+	}, {
+		// tasks: [ mytask1, mytask6 with condition]
+		// finally: [mytask2]
+		name: "10 - DAG task failed/skipped, return final tasks",
+		desc: "DAG task (mytask1) failed, mytask6 condition failed - schedule final tasks (mytask2)",
+		state: append(conditionCheckFailedWithOthersFailedState, noneStartedState[1]),
+		DAGTasks: []v1beta1.PipelineTask{pts[5], pts[0]},
+		finalTasks: []v1beta1.PipelineTask{pts[1]},
+		expectedFinalTasks: []*ResolvedPipelineRunTask{noneStartedState[1]},
+	}, {
+		// tasks: [ mytask6 with condition, mytask7 runAfter mytask6]
+		// finally: [mytask2]
+		name: "11 - DAG task skipped, return final tasks",
+		desc: "DAG task (mytask6) condition failed, mytask6 and mytask7 skipped - schedule final tasks (mytask2)",
+		state: append(taskWithParentSkippedState, noneStartedState[1]),
+		DAGTasks: []v1beta1.PipelineTask{pts[5], pts[6]},
+		finalTasks: []v1beta1.PipelineTask{pts[1]},
+		expectedFinalTasks: []*ResolvedPipelineRunTask{noneStartedState[1]},
+	}, {
+		// tasks: [ mytask1, mytask6 with condition, mytask8 runAfter mytask6]
+		// finally: [mytask2]
+		name: "12 - DAG task succeeded/skipped, return final tasks",
+		desc: "DAG task (mytask1) finished - DAG task (mytask6) condition failed, mytask6 and mytask8 skipped - schedule final tasks (mytask2)",
+		state: append(taskWithMultipleParentsSkippedState, noneStartedState[1]),
+		DAGTasks: []v1beta1.PipelineTask{pts[0], pts[5], pts[7]},
+		finalTasks: []v1beta1.PipelineTask{pts[1]},
+		expectedFinalTasks: []*ResolvedPipelineRunTask{noneStartedState[1]},
+	}, {
+		// tasks: [ mytask1, mytask6 with condition, mytask8 runAfter mytask6, mytask9 runAfter mytask1 and mytask6]
+		// finally: [mytask2]
+		name: "13 - DAG task succeeded/skipped - return final tasks",
+		desc: "DAG task (mytask1) finished - DAG task (mytask6) condition failed, mytask6, mytask8, and mytask9 skipped" +
+			" - schedule final tasks (mytask2)",
+		state: append(taskWithGrandParentSkippedState, noneStartedState[1]),
+		DAGTasks: []v1beta1.PipelineTask{pts[0], pts[5], pts[7], pts[8]},
+		finalTasks: []v1beta1.PipelineTask{pts[1]},
+		expectedFinalTasks: []*ResolvedPipelineRunTask{noneStartedState[1]},
+	}, {
+		// tasks: [ mytask1, mytask6 with condition, mytask8 runAfter mytask6, mytask9 runAfter mytask1 and mytask6]
+		// finally: [mytask2]
+		name: "14 - DAG task succeeded, skipped - return final tasks",
+		desc: "DAG task (mytask1) finished - DAG task (mytask6) failed - mytask8 and mytask9 skipped" +
+			" - schedule final tasks (mytask2)",
+		state: append(taskWithGrandParentsOneFailedState, noneStartedState[1]),
+		DAGTasks: []v1beta1.PipelineTask{pts[0], pts[5], pts[7], pts[8]},
+		finalTasks: []v1beta1.PipelineTask{pts[1]},
+		expectedFinalTasks: []*ResolvedPipelineRunTask{noneStartedState[1]},
+	}, {
+		// tasks: [ mytask1, mytask6 with condition, mytask8 runAfter mytask6, mytask9 runAfter mytask1 and mytask6]
+		// finally: [mytask2]
+		name: "15 - DAG task succeeded/started - no final tasks",
+		desc: "DAG task (mytask1) finished - DAG task (mytask6) started - do not schedule final tasks",
+		state: append(taskWithGrandParentsOneNotRunState, noneStartedState[1]),
+		DAGTasks: []v1beta1.PipelineTask{pts[0], pts[5], pts[7], pts[8]},
+		finalTasks: []v1beta1.PipelineTask{pts[1]},
+		expectedFinalTasks: []*ResolvedPipelineRunTask{},
+	}}
+	for _, tc := range tcs {
+		dagGraph, err := dag.Build(v1beta1.PipelineTaskList(tc.DAGTasks))
+		if err != nil {
+			t.Fatalf("Unexpected error while building DAG for pipelineTasks %v: %v", tc.DAGTasks, err)
+		}
+		finalGraph, err := dag.Build(v1beta1.PipelineTaskList(tc.finalTasks))
+		if err != nil {
+			t.Fatalf("Unexpected error while building DAG for final pipelineTasks %v: %v", tc.finalTasks, err)
+		}
+		t.Run(tc.name, func(t *testing.T) {
+			next := tc.state.GetFinalTasks(dagGraph, finalGraph)
+			if d := cmp.Diff(tc.expectedFinalTasks, next); d != "" {
+				t.Errorf("Didn't get expected final Tasks for %s (%s): %s", tc.name, tc.desc, diff.PrintWantGot(d))
+			}
+		})
+	}
+}
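
The table-driven cases above exercise `GetFinalTasks`, which only returns the `finally` tasks once every DAG task has succeeded, failed, or been skipped. As a rough illustration, cases 04 and 05 model a `Pipeline` shaped like the following minimal sketch (the `mytask1`/`mytask2` names mirror the `pts` fixtures; the referenced `Task`s are assumed to exist):

```yaml
spec:
  tasks:
  - name: mytask1
    taskRef:
      name: mytask1
  finally:
  - name: mytask2
    taskRef:
      name: mytask2
```
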
diff --git a/test/pipelinefinally_test.go b/test/pipelinefinally_test.go
new file mode 100644
index 00000000000..a9ae68ea07c
--- /dev/null
+++ b/test/pipelinefinally_test.go
@@ -0,0 +1,287 @@
+// +build e2e
+
+/*
+Copyright 2019 The Tekton Authors
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+    http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package test
+
+import (
+	"strings"
+	"testing"
+
+	"github.com/tektoncd/pipeline/pkg/apis/pipeline/v1alpha1"
+	"github.com/tektoncd/pipeline/pkg/apis/pipeline/v1beta1"
+	"github.com/tektoncd/pipeline/pkg/reconciler/pipelinerun/resources"
+	corev1 "k8s.io/api/core/v1"
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+	"knative.dev/pkg/apis"
+	duckv1beta1 "knative.dev/pkg/apis/duck/v1beta1"
+	knativetest "knative.dev/pkg/test"
+)
+
+func TestPipelineLevelFinally_OneDAGTaskFailed_Failure(t *testing.T) {
+	c, namespace := setup(t)
+	knativetest.CleanupOnInterrupt(func() { tearDown(t, c, namespace) }, t.Logf)
+	defer tearDown(t, c, namespace)
+
+	cond := getCondition("failedcondition", namespace)
+	if _, err := c.ConditionClient.Create(cond); err != nil {
+		t.Fatalf("Failed to create Condition `%s`: %s", cond.Name, err)
+	}
+
+	task := getFailTask("failtask", namespace)
+	if _, err := c.TaskClient.Create(task); err != nil {
+		t.Fatalf("Failed to create dag Task: %s", err)
+	}
+
+	delayedTask := getDelaySuccessTask("delayed-task", namespace)
+	if _, err := c.TaskClient.Create(delayedTask); err != nil {
+		t.Fatalf("Failed to create dag Task: %s", err)
+	}
+
+	finalTask := getSuccessTask("successtask", namespace)
+	if _, err := c.TaskClient.Create(finalTask); err != nil {
+		t.Fatalf("Failed to create final Task: %s", err)
+	}
+
+	pipeline := getPipeline(
+		namespace,
+		"pipeline-failed-dag-tasks",
+		map[string]string{
+			"dagtask1": "failtask",
+			"dagtask2": "delayed-task",
+			"dagtask3": "successtask",
+		},
+		map[string]string{
+			"dagtask3": "failedcondition",
+		},
+		map[string]string{
+			"finaltask1": "successtask",
+		},
+	)
+	if _, err := c.PipelineClient.Create(pipeline); err != nil {
+		t.Fatalf("Failed to create Pipeline: %s", err)
+	}
+
+	pipelineRun := getPipelineRun(namespace, "pipelinerun-failed-dag-tasks", "pipeline-failed-dag-tasks")
+	if _, err := c.PipelineRunClient.Create(pipelineRun); err != nil {
+		t.Fatalf("Failed to create Pipeline Run `%s`: %s", "pipelinerun-failed-dag-tasks", err)
+	}
+
+	if err := WaitForPipelineRunState(c, "pipelinerun-failed-dag-tasks", timeout, PipelineRunFailed("pipelinerun-failed-dag-tasks"), "PipelineRunFailed"); err != nil {
+		t.Fatalf("Error waiting for PipelineRun %s to fail: %v", "pipelinerun-failed-dag-tasks", err)
+	}
+
+	taskrunList, err := c.TaskRunClient.List(metav1.ListOptions{LabelSelector: "tekton.dev/pipelineRun=pipelinerun-failed-dag-tasks"})
+	if err != nil {
+		t.Fatalf("Error listing TaskRuns for PipelineRun %s: %s", "pipelinerun-failed-dag-tasks", err)
+	}
+
+	var dagTask1EndTime, dagTask2EndTime, finalTaskStartTime *metav1.Time
+	// verify dag task failed, parallel dag task succeeded, and final task succeeded
+	for _, taskrunItem := range taskrunList.Items {
+		switch n := taskrunItem.Name; {
+		case strings.HasPrefix(n, "pipelinerun-failed-dag-tasks-dagtask1"):
+			if !isFailed(t, n, taskrunItem.Status.Conditions) {
+				t.Fatalf("TaskRun %s for dag task should have failed", n)
+			}
+			dagTask1EndTime = taskrunItem.Status.CompletionTime
+		case strings.HasPrefix(n, "pipelinerun-failed-dag-tasks-dagtask2"):
+			if err := WaitForTaskRunState(c, n, TaskRunSucceed(n), "TaskRunSuccess"); err != nil {
+				t.Errorf("Error waiting for TaskRun to succeed: %v", err)
+			}
+			dagTask2EndTime = taskrunItem.Status.CompletionTime
+		case strings.HasPrefix(n, "pipelinerun-failed-dag-tasks-dagtask3"):
+			if !isSkipped(t, n, taskrunItem.Status.Conditions) {
+				t.Fatalf("TaskRun %s for dag task should have skipped due to condition failure", n)
+			}
+		case strings.HasPrefix(n, "pipelinerun-failed-dag-tasks-finaltask1"):
+			if err := WaitForTaskRunState(c, n, TaskRunSucceed(n), "TaskRunSuccess"); err != nil {
+				t.Errorf("Error waiting for TaskRun to succeed: %v", err)
+			}
+			finalTaskStartTime = taskrunItem.Status.StartTime
+		default:
+			t.Fatalf("TaskRuns were not found for both final and dag tasks")
+		}
+	}
+	// final task should start executing after dagtask1 fails and dagtask2 is done
+	if finalTaskStartTime.Before(dagTask1EndTime) || finalTaskStartTime.Before(dagTask2EndTime) {
+		t.Fatalf("Final tasks should start executing only after all DAG tasks finish")
+	}
+}
+
+func TestPipelineLevelFinally_OneFinalTaskFailed_Failure(t *testing.T) {
+	c, namespace := setup(t)
+	knativetest.CleanupOnInterrupt(func() { tearDown(t, c, namespace) }, t.Logf)
+	defer tearDown(t, c, namespace)
+
+	task := getSuccessTask("successtask", namespace)
+	if _, err := c.TaskClient.Create(task); err != nil {
+		t.Fatalf("Failed to create dag Task: %s", err)
+	}
+
+	finalTask := getFailTask("failtask", namespace)
+	if _, err := c.TaskClient.Create(finalTask); err != nil {
+		t.Fatalf("Failed to create final Task: %s", err)
+	}
+
+	pipeline := getPipeline(
+		namespace,
+		"pipeline-failed-final-tasks",
+		map[string]string{
+			"dagtask1": "successtask",
+		},
+		map[string]string{},
+		map[string]string{
+			"finaltask1": "failtask",
+		},
+	)
+	if _, err := c.PipelineClient.Create(pipeline); err != nil {
+		t.Fatalf("Failed to create Pipeline: %s", err)
+	}
+
+	pipelineRun := getPipelineRun(namespace, "pipelinerun-failed-final-tasks", "pipeline-failed-final-tasks")
+	if _, err := c.PipelineRunClient.Create(pipelineRun); err != nil {
+		t.Fatalf("Failed to create Pipeline Run `%s`: %s", "pipelinerun-failed-final-tasks", err)
+	}
+
+	if err := WaitForPipelineRunState(c, "pipelinerun-failed-final-tasks", timeout, PipelineRunFailed("pipelinerun-failed-final-tasks"), "PipelineRunFailed"); err != nil {
+		t.Errorf("Error waiting for PipelineRun %s to finish: %s", "pipelinerun-failed-final-tasks", err)
+		t.Fatalf("PipelineRun execution failed")
+	}
+
+	taskrunList, err := c.TaskRunClient.List(metav1.ListOptions{LabelSelector: "tekton.dev/pipelineRun=pipelinerun-failed-final-tasks"})
+	if err != nil {
+		t.Fatalf("Error listing TaskRuns for PipelineRun %s: %s", "pipelinerun-failed-final-tasks", err)
+	}
+
+	// verify dag task succeeded and final task failed
+	for _, taskrunItem := range taskrunList.Items {
+		switch n := taskrunItem.Name; {
+		case strings.HasPrefix(n, "pipelinerun-failed-final-tasks-dagtask1"):
+			if !isSuccessful(t, n, taskrunItem.Status.Conditions) {
+				t.Fatalf("TaskRun %s for dag task should have succeeded", n)
+			}
+		case strings.HasPrefix(n, "pipelinerun-failed-final-tasks-finaltask1"):
+			if !isFailed(t, n, taskrunItem.Status.Conditions) {
+				t.Fatalf("TaskRun %s for final task should have failed", n)
+			}
+		default:
+			t.Fatalf("TaskRuns were not found for both final and dag tasks")
+		}
+	}
+}
+
+func isSuccessful(t *testing.T, taskRunName string, conds duckv1beta1.Conditions) bool {
+	for _, c := range conds {
+		if c.Type == apis.ConditionSucceeded {
+			if c.Status != corev1.ConditionTrue {
+				t.Errorf("TaskRun status %q is not succeeded, got %q", taskRunName, c.Status)
+			}
+			return true
+		}
+	}
+	t.Errorf("TaskRun status %q had no Succeeded condition", taskRunName)
+	return false
+}
+
+func isSkipped(t *testing.T, taskRunName string, conds duckv1beta1.Conditions) bool {
+	for _, c := range conds {
+		if c.Type == apis.ConditionSucceeded {
+			if c.Status != corev1.ConditionFalse && c.Reason != resources.ReasonConditionCheckFailed {
+				t.Errorf("TaskRun status %q is not skipped due to condition failure, got %q", taskRunName, c.Status)
+			}
+			return true
+		}
+	}
+	t.Errorf("TaskRun status %q had no Succeeded condition", taskRunName)
+	return false
+}
+
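+// The helpers below build the minimal resources these e2e tests need: getTaskDef
+// returns a single-step Task that runs the given script in an alpine container,
+// and getCondition returns a Condition whose check always exits 1, so any
+// PipelineTask guarded by it is skipped.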
+func getTaskDef(n, namespace, script string) *v1beta1.Task {
+	return &v1beta1.Task{
+		ObjectMeta: metav1.ObjectMeta{Name: n, Namespace: namespace},
+		Spec: v1beta1.TaskSpec{
+			Steps: []v1beta1.Step{{
+				Container: corev1.Container{Image: "alpine"},
+				Script:    script,
+			}},
+		},
+	}
+}
+
+func getSuccessTask(n, namespace string) *v1beta1.Task {
+	return getTaskDef(n, namespace, "exit 0")
+}
+
+func getFailTask(n, namespace string) *v1beta1.Task {
+	return getTaskDef(n, namespace, "exit 1")
+}
+
+func getDelaySuccessTask(n, namespace string) *v1beta1.Task {
+	return getTaskDef(n, namespace, "sleep 5; exit 0")
+}
+
+func getCondition(n, namespace string) *v1alpha1.Condition {
+	return &v1alpha1.Condition{
+		ObjectMeta: metav1.ObjectMeta{Name: n, Namespace: namespace},
+		Spec: v1alpha1.ConditionSpec{
+			Check: v1alpha1.Step{
+				Container: corev1.Container{Image: "ubuntu"},
+				Script:    "exit 1",
+			},
+		},
+	}
+}
+
+func getPipeline(namespace, p string, t map[string]string, c map[string]string, f map[string]string) *v1beta1.Pipeline {
+	var pt []v1beta1.PipelineTask
+	var fpt []v1beta1.PipelineTask
+	for k, v := range t {
+		task := v1beta1.PipelineTask{
+			Name:    k,
+			TaskRef: &v1beta1.TaskRef{Name: v},
+		}
+		if _, ok := c[k]; ok {
+			task.Conditions = []v1beta1.PipelineTaskCondition{{
+				ConditionRef: c[k],
+			}}
+		}
+		pt = append(pt, task)
+	}
+	for k, v := range f {
+		fpt = append(fpt, v1beta1.PipelineTask{
+			Name:    k,
+			TaskRef: &v1beta1.TaskRef{Name: v},
+		})
+	}
+	pipeline := &v1beta1.Pipeline{
+		ObjectMeta: metav1.ObjectMeta{Name: p, Namespace: namespace},
+		Spec: v1beta1.PipelineSpec{
+			Tasks:   pt,
+			Finally: fpt,
+		},
+	}
+	return pipeline
+}
+
+func getPipelineRun(namespace, pr, p string) *v1beta1.PipelineRun {
+	return &v1beta1.PipelineRun{
+		ObjectMeta: metav1.ObjectMeta{Name: pr, Namespace: namespace},
+		Spec: v1beta1.PipelineRunSpec{
+			PipelineRef: &v1beta1.PipelineRef{Name: p},
+		},
+	}
+}
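
For reference, the `Pipeline` that `getPipeline` assembles for `TestPipelineLevelFinally_OneDAGTaskFailed_Failure` corresponds roughly to the following spec (a sketch for readability; metadata is omitted and the task ordering is illustrative, since the helper iterates over maps):

```yaml
spec:
  tasks:
  - name: dagtask1
    taskRef:
      name: failtask
  - name: dagtask2
    taskRef:
      name: delayed-task
  - name: dagtask3
    taskRef:
      name: successtask
    conditions:
    - conditionRef: failedcondition
  finally:
  - name: finaltask1
    taskRef:
      name: successtask
```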