
new Cluster resource #206

Merged (10 commits) on Nov 9, 2018
Conversation

@nader-ziada (Member) commented Oct 30, 2018

Fixes: #177

To enable tasks to connect to an external cluster other than the host
cluster, for example to deploy an app.

Proposed changes

  • a new resource of type Cluster
  • the cluster resource adds a step that creates a kubeconfig for the target cluster using a new binary, kubeconfig-writer
  • integration test to check the kubeconfig
  • resource validation for the cluster resource
  • remove clusters from the pipeline and pipeline params types
  • change examples to use the cluster resource
  • add documentation in using.md
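Roughly, the new resource carries the connection details a task needs to reach the target cluster. A minimal sketch of what the type and its validation might look like (field names are assumptions pieced together from the discussion below, not the merged code):

```go
package main

import "fmt"

// SecretParam points a resource field at a key in a Kubernetes secret,
// e.g. fieldName "cadata" -> key "cadataKey" in secret "target-cluster-secrets".
type SecretParam struct {
	FieldName  string
	SecretKey  string
	SecretName string
}

// ClusterResource is a sketch of the proposed resource: the endpoint and
// credentials used to build a kubeconfig for an external cluster.
type ClusterResource struct {
	Name     string
	URL      string
	Username string
	Password string
	Token    string
	CAData   []byte
	Secrets  []SecretParam
}

// Validate sketches the "resource validation" bullet above: a cluster
// resource needs at least a name and a URL to be usable.
func (r ClusterResource) Validate() error {
	if r.Name == "" || r.URL == "" {
		return fmt.Errorf("cluster resource requires both a name and a url")
	}
	return nil
}

func main() {
	r := ClusterResource{Name: "target-cluster", URL: "http://10.10.10.10"}
	if err := r.Validate(); err != nil {
		fmt.Println("invalid:", err)
		return
	}
	fmt.Println("valid cluster resource:", r.Name)
}
```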

@knative-prow-robot knative-prow-robot added do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Oct 30, 2018
@knative-prow-robot knative-prow-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Oct 30, 2018
@knative-prow-robot knative-prow-robot added size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. and removed size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Oct 30, 2018
@knative-prow-robot knative-prow-robot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. and removed size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. labels Oct 30, 2018
@knative-prow-robot knative-prow-robot added size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. and removed size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Nov 2, 2018
@bobcatfish (Collaborator) left a comment


Lookin good!

I think you're planning to do this already, but it would be great to have some basic docs for the kubeconfig binary itself

cmd/kubeconfig/main.go: 4 resolved review comments (outdated)
c.APIVersion = "v1"
c.Kind = "Config"

destinationFile := fmt.Sprintf("/workspace/%s/kubeconfig", resource.ClusterName)
Collaborator:

maybe the path should be configurable?
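One way the path could be made configurable is behind a flag, keeping the current value as the default (the flag name here is hypothetical, purely to illustrate the suggestion):

```go
package main

import (
	"flag"
	"fmt"
)

// destinationDir is a hypothetical flag; in the snippet above the
// "/workspace" prefix is hardcoded into the Sprintf call.
var destinationDir = flag.String("destinationDir", "/workspace",
	"directory under which the kubeconfig file is written")

// kubeconfigPath builds the target file path for a given cluster name.
func kubeconfigPath(dir, clusterName string) string {
	return fmt.Sprintf("%s/%s/kubeconfig", dir, clusterName)
}

func main() {
	flag.Parse()
	fmt.Println(kubeconfigPath(*destinationDir, "target-cluster"))
}
```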

cmd/kubeconfig/main.go: resolved review comment (outdated)
pkg/apis/pipeline/v1alpha1/cluster_resource.go: resolved review comment (outdated)
@knative-prow-robot knative-prow-robot added size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. and removed size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. labels Nov 6, 2018
To enable tasks to connect to an external cluster other than the host
cluster, for example to deploy an app. tektoncd#177
- input resources will add a step to create kubeconfig from resource
- container image cmd will create the file in the shared volume for a build
- deploy tasks can then use the config to perform actions on cluster
- The resource can now include a SecretParam to indicate which secret
should be used to populate a field in the resource
- cleanup comments
- rename kubeconfig to kubeconfig writer to make it more clear
- add success log info message
Now that cluster is a type of resource, removing other ways of adding a cluster to a task or pipeline
@nader-ziada nader-ziada changed the title [WIP] new cluster resource new Cluster resource Nov 7, 2018
@knative-prow-robot knative-prow-robot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Nov 7, 2018
@bobcatfish (Collaborator) left a comment

thanks for figuring out how to get the secrets to work! 🎉

left some comments; mostly I think with a couple of changes we can have slightly fewer, but more focused and easier to debug, tests

docs/using.md: 2 resolved review comments (outdated)
secretName: target-cluster-secrets
- fieldName: cadata
secretKey: cadataKey
secretName: target-cluster-secrets
Collaborator:

awesome!

examples/deploy_tasks.yaml: 2 resolved review comments (outdated)
@@ -67,9 +67,6 @@ type PipelineTask struct {
InputSourceBindings []SourceBinding `json:"inputSourceBindings,omitempty"`
// +optional
OutputSourceBindings []SourceBinding `json:"outputSourceBindings,omitempty"`
// TODO(#68) Cluster should become a type of Resource
// +optional
ClusterBindings []ClusterBinding `json:"clusterBindings,omitempty"`
Collaborator:

🎉

Value: "http://10.10.10.10",
}, {
Name: "CAdata",
// echo "my-ca-cert" | base64
Collaborator:

might be worth mentioning this in the docs on how to use it

Member Author:

it's just how you create a secret, which I link to in the docs

Collaborator:

kk

} {
setUp()
}, {
desc: "cluster resource with plain text",
Collaborator:

if you put the logic in case v1alpha1.PipelineResourceTypeCluster: into a separate function, you could test that function directly instead of testing it at this level

if we did that, I'd be happy to remove these cases completely (or just keep one at most)

Member Author:

But these cases are testing two different things,

  • cluster resource with plain text is validating the step is added to the build
  • cluster resource with secrets is validating the env from the secrets are added to the build step

Collaborator:

oh yeah for sure, we'd still have the 2 different tests, but we could have them at a different level.

caveat: i made a bit of a mistake and i thought this was the reconciler test! so i think what im saying is a lot less important now

however i still think this logic could be factored a bit differently so that it's slightly more clear and testable (esp as we keep adding more types of resources)

for example if you had functions like these:

func getClusterStep(r *v1alpha1.PipelineResource) (*corev1.Container, error) {
	clusterResource, err := v1alpha1.NewClusterResource(r)
	if err != nil {
		return nil, fmt.Errorf("error creating cluster resource %q, resource must be invalid: %s", r.Name, err)
	}

	var envVars []corev1.EnvVar
	for _, sec := range clusterResource.Secrets {
		ev := corev1.EnvVar{
			Name: strings.ToUpper(sec.FieldName),
			ValueFrom: &corev1.EnvVarSource{
				SecretKeyRef: &corev1.SecretKeySelector{
					LocalObjectReference: corev1.LocalObjectReference{
						Name: sec.SecretName,
					},
					Key: sec.SecretKey,
				},
			},
		}
		envVars = append(envVars, ev)
	}

	clusterContainer := &corev1.Container{
		Name:  "kubeconfig",
		Image: *kubeconfigWriterImage,
		Args: []string{
			"-clusterConfig", clusterResource.String(),
		},
		Env: envVars,
	}
	return clusterContainer, nil
}

// or you could even make this functional, have it return a new build object!
func addBuildStep(b *buildv1alpha1.Build, c *corev1.Container) {
	buildSteps := append([]corev1.Container{*c}, b.Spec.Steps...)
	b.Spec.Steps = buildSteps
}

You could write unit tests to cover both of the above, which due to their more specific interface don't even need task runs or tasks to test.

Then inside of AddInputResource you could have:

		case v1alpha1.PipelineResourceTypeCluster:
			{
				clusterContainer, err := getClusterStep(resource)
				if err != nil {
					return nil, fmt.Errorf("couldn't add cluster resource to task %q: %s", task.Name, err)
				}
				addBuildStep(build, clusterContainer)
			}
		}

And suddenly, since AddInputResource is only calling out to 2 functions, both of which we have unit tested well, it's a lot less important (I would say not even worth it) to have test coverage at this level, and we can save ourselves a lot of boilerplate in our test code.
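For instance, a helper like addBuildStep just prepends the container to the build's steps, so with that narrow interface it can be unit tested without taskruns or tasks at all. Sketched here with simplified stand-in types instead of the real corev1/buildv1alpha1 ones, only to show the shape of such a test:

```go
package main

import "fmt"

// Container and Build are stand-ins for corev1.Container and
// buildv1alpha1.Build, stripped down to the fields this test needs.
type Container struct{ Name string }

type Build struct{ Steps []Container }

// addBuildStep prepends the step, mirroring the suggestion above.
func addBuildStep(b *Build, c Container) {
	b.Steps = append([]Container{c}, b.Steps...)
}

func main() {
	b := &Build{Steps: []Container{{Name: "existing-step"}}}
	addBuildStep(b, Container{Name: "kubeconfig"})
	fmt.Println(b.Steps[0].Name, b.Steps[1].Name) // kubeconfig runs first
}
```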

Member Author:

I will refactor the func out of the main block of code, but would still like to keep the table test to be testing the external interface (contract) of the AddInputResource functionality. Testing the private function ties down the implementation. wdyt?

Contributor:

I agree with @pivotal-nader-ziada's thought of testing the functionality instead of the implementation. Unit tests should provide the confidence to refactor code without everything completely breaking apart. Testing private functions tends to restrict the refactor scope without altering the tests.

Collaborator:

I see what you're saying - I think this means we should push the interface changes up a level, i.e. update the interface of AddInputResource so that it doesn't need to have TaskRuns and Tasks instantiated in order to test it :D

We can keep iterating on it.

Unit tests should provide the confidence to refactor code without everything completely breaking apart.

I dunno, I think we should limit the amount of code that one can change without also changing a unit test (and we should limit that to the "imperative shell").

I think very finely grained and focused unit tests put pressure on us to design our code better - and I think the fact that so many objects need to be instantiated for a lot of our existing unit tests is a sign we can improve the design.

Testing private functions tend to restrict the refactor scope without altering the tests.

I find that when we start wanting to test private functions, it's often a sign that we can break up the code more, such that these private functions become public. For example, perhaps we want to have a package that is about clusters specifically, which could have a public function like getClusterStep

I find this tweet from dave cheney related and inspiring:

use packages like a knife to separate code that consumes functionality from code that provide functionality. Use unit tests along the cut line to lock in the behaviour of your packages.

metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func TestClusterResource(t *testing.T) {
Collaborator:

i feel like we already have a lot of integration tests - what if we had the helm integration test use a cluster resource instead, then we could just have the one test?

Member Author:

I feel this test is specific to the creation of the kubeconfig file, I would prefer to remove the kaniko test instead, WDYT?

@shashwathi (Contributor), Nov 8, 2018:

If the concern is only about time then we could consider refactoring tests to be parallel since all e2e tests are self contained in their namespaces.

Collaborator:

my concern is less about time and more about:

  1. Maintaining the integration tests
  2. Relying on integration tests for coverage

this is definitely a philosophical point of view, but i recently watched "Integration Tests Are a Scam" and i realized that relying too much on integration tests can cause us to be lax with our unit test coverage, which in turn promotes bad design choices.

This is besides the fact that integration tests are slow and when they fail, it's very hard to know what to do.

(Watch this space for my upcoming kubecon talk which is pretty much all about this!!)

So I would like to minimize the surface of what we test in our integration tests as much as possible, and focus instead on organizing and unit testing our code in such a way that we don't feel like we need to cover it in integration tests #the-possibly-unrealistic-ideal

I feel this test is specific to the creation of the kubeconfig file, I would prefer to remove the kaniko test instead, WDYT?

in an ideal world what i would really like is just one integration test that includes as much functionality as we think is reasonable, e.g. it uses a kube cluster, has a service account, builds and pushes an image.

Anyway feel free to ignore this and go ahead with this test but you'll probably see a PR from me in the future where I try to move our integration tests toward this ideal - we can always continue the discussion there :D

Member Author:

for this specific case, I can't really merge this integration test checking for kubeconfig with the helm one because we would need an actual 2nd cluster for the helm deployment to target. The kaniko test we can delete, or maybe the hello-world one?

Collaborator:

I can't really merge this integration test checking for kubeconfig with the helm one because we would need an actual 2nd cluster for the helm deployment to target.

oh i see, i didn't realize that the integration test wasn't actually using the kubeconfig

would you mind making a follow-up issue in our backlog that we should consider having an integration test that actually uses a second cluster?

alternatively:
we could use the cluster resource to (somehow) point at the current cluster, but maybe a different namespace or something?

make the tests simpler and easier to debug
update examples
fix docs
// only one authentication technique per user is allowed in a kubeconfig, so clear out the password if a token is provided
user := resource.Username
pass := resource.Password
if resource.Token != "" {
Contributor:

I think it's helpful to log whether username/password or token is used for authentication. If the user has provided both and a choice is made, that would be a useful data point for debugging.

@shashwathi (Contributor) left a comment

Great job @pivotal-nader-ziada :)

@knative-prow-robot

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: pivotal-nader-ziada, shashwathi

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:
  • OWNERS [pivotal-nader-ziada]

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@bobcatfish
Collaborator

Looking good! If you get tired of the philosophical debate about the tests feel free to remove this and go ahead and merge:
/hold

/lgtm

(But if you can see the same dream I see, a dream of many narrowly focused unit tests and very very very few integration tests, i think we could make this even more awesome!)

@knative-prow-robot knative-prow-robot added do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. lgtm Indicates that a PR is ready to be merged. labels Nov 8, 2018
@nader-ziada
Member Author

I like your dream of many unit tests and few integration tests @bobcatfish, let's see what I can do :D

@knative-prow-robot knative-prow-robot removed the lgtm Indicates that a PR is ready to be merged. label Nov 8, 2018
@knative-metrics-robot

The following is the coverage report on pkg/.
Say /test pull-knative-build-pipeline-go-coverage to re-run this coverage report

File                                                          Old Coverage   New Coverage   Delta
cmd/kubeconfigwriter/main.go                                  Do not exist   0.0%
pkg/apis/pipeline/v1alpha1/cluster_resource.go                Do not exist   59.4%
pkg/apis/pipeline/v1alpha1/pipelineparams_validation.go       76.3%          75.9%          -0.5
pkg/apis/pipeline/v1alpha1/pipelineresource_validation.go     0.0%           68.2%          68.2
pkg/reconciler/v1alpha1/taskrun/resources/input_resources.go  84.6%          88.6%          4.0

@nader-ziada
Member Author

/retest

@bobcatfish
Collaborator

/lgtm

@knative-prow-robot knative-prow-robot added the lgtm Indicates that a PR is ready to be merged. label Nov 8, 2018
@nader-ziada
Member Author

/hold cancel

@knative-prow-robot knative-prow-robot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Nov 9, 2018
@knative-prow-robot knative-prow-robot merged commit 2dfe260 into tektoncd:master Nov 9, 2018
@nader-ziada nader-ziada deleted the cluster branch November 29, 2018 20:04
Labels
approved Indicates a PR has been approved by an approver from all required OWNERS files. lgtm Indicates that a PR is ready to be merged. size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files.
5 participants