new Cluster resource #206
Conversation
Force-pushed from 9be29a8 to f51a585
Lookin good!
I think you're planning to do this already, but it would be great to have some basic docs for the kubeconfig
binary itself
cmd/kubeconfig/main.go (Outdated)

    c.APIVersion = "v1"
    c.Kind = "Config"
    ...
    destinationFile := fmt.Sprintf("/workspace/%s/kubeconfig", resource.ClusterName)
maybe the path should be configurable?
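A configurable destination could be as simple as a flag. Here's a minimal sketch; note that the flag name -destinationDir and the path layout are assumptions for illustration, not the actual kubeconfig-writer CLI:

```go
package main

import (
	"flag"
	"fmt"
	"path/filepath"
)

// destinationDir is a hypothetical flag; the real binary may name it differently.
// The default preserves the current hard-coded /workspace behaviour.
var destinationDir = flag.String("destinationDir", "/workspace", "directory to write the kubeconfig into")

// destinationFile builds the output path from the configurable base directory.
func destinationFile(clusterName string) string {
	return filepath.Join(*destinationDir, clusterName, "kubeconfig")
}

func main() {
	flag.Parse()
	fmt.Println(destinationFile("target-cluster"))
}
```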
Force-pushed from 71dc0b9 to 7493de9
To enable tasks to connect to an external cluster other than the host cluster, for example to deploy an app. tektoncd#177
- input resources will add a step to create a kubeconfig from the resource
- the container image cmd will create the file in the shared volume for a build
- deploy tasks can then use the config to perform actions on the cluster
- The resource can now include a SecretParam to indicate which secret should be used to populate a field in the resource
- clean up comments, rename kubeconfig to kubeconfig writer to make it clearer, add a success log info message
Now that cluster is a type of resource, removing other ways of adding a cluster to a task or pipeline
thanks for figuring out how to get the secrets to work! 🎉
left some comments. Mostly I think with a couple of changes we can have slightly fewer, but more focused and easier-to-debug, tests
      secretName: target-cluster-secrets
    - fieldName: cadata
      secretKey: cadataKey
      secretName: target-cluster-secrets
awesome!
@@ -67,9 +67,6 @@ type PipelineTask struct {
 	InputSourceBindings []SourceBinding `json:"inputSourceBindings,omitempty"`
 	// +optional
 	OutputSourceBindings []SourceBinding `json:"outputSourceBindings,omitempty"`
-	// TODO(#68) Cluster should become a type of Resource
-	// +optional
-	ClusterBindings []ClusterBinding `json:"clusterBindings,omitempty"`
🎉
	Value: "http://10.10.10.10",
}, {
	Name: "CAdata",
	// echo "my-ca-cert" | base64
might be worth mentioning this in the docs on how to use it
it's just how you create a secret, which I link to in the docs
kk
} {
	setUp()
}, {
	desc: "cluster resource with plain text",
if you put the logic in case v1alpha1.PipelineResourceTypeCluster:
into a separate function, you could test that function directly instead of testing it at this level
if we did that, i'd be happy to remove these cases completely (or just keep one at most)
But these cases are testing two different things:
- "cluster resource with plain text" is validating the step is added to the build
- "cluster resource with secrets" is validating the env vars from the secrets are added to the build step
oh yeah for sure, we'd still have the 2 different tests, but we could have them at a different level.
caveat: i made a bit of a mistake and i thought this was the reconciler test! so i think what im saying is a lot less important now
however i still think this logic could be factored a bit differently so that it's slightly more clear and testable (esp as we keep adding more types of resources)
for example if you had functions like these:
func getClusterStep(r *v1alpha1.PipelineResource) (*corev1.Container, error) {
clusterResource, err := v1alpha1.NewClusterResource(r)
if err != nil {
return nil, fmt.Errorf("error creating cluster resource %q, resource must be invalid: %s", r.Name, err)
}
var envVars []corev1.EnvVar
for _, sec := range clusterResource.Secrets {
ev := corev1.EnvVar{
Name: strings.ToUpper(sec.FieldName),
ValueFrom: &corev1.EnvVarSource{
SecretKeyRef: &corev1.SecretKeySelector{
LocalObjectReference: corev1.LocalObjectReference{
Name: sec.SecretName,
},
Key: sec.SecretKey,
},
},
}
envVars = append(envVars, ev)
}
clusterContainer := &corev1.Container{
Name: "kubeconfig",
Image: *kubeconfigWriterImage,
Args: []string{
"-clusterConfig", clusterResource.String(),
},
Env: envVars,
}
return clusterContainer, nil
}
// or you could even make this functional, have it return a new build object!
func addBuildStep(b *buildv1alpha1.Build, c *corev1.Container) {
buildSteps := append([]corev1.Container{*c}, b.Spec.Steps...)
b.Spec.Steps = buildSteps
}
You could write unit tests to cover both of the above, which due to their more specific interface don't even need task runs or tasks to test.
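For illustration, here's a sketch of how small such a test could be. It uses simplified stand-in types so it's self-contained; the real code would use corev1.Container and buildv1alpha1.Build:

```go
package main

import "fmt"

// Simplified stand-ins for buildv1alpha1.Build and corev1.Container,
// just to illustrate the prepend behaviour of addBuildStep.
type Container struct{ Name string }
type BuildSpec struct{ Steps []Container }
type Build struct{ Spec BuildSpec }

// addBuildStep prepends the container so the kubeconfig step runs first.
func addBuildStep(b *Build, c *Container) {
	b.Spec.Steps = append([]Container{*c}, b.Spec.Steps...)
}

func main() {
	b := &Build{Spec: BuildSpec{Steps: []Container{{Name: "existing-step"}}}}
	addBuildStep(b, &Container{Name: "kubeconfig"})
	// the new step is prepended, existing steps follow
	fmt.Println(b.Spec.Steps[0].Name, b.Spec.Steps[1].Name)
}
```

Note the assertion needs no TaskRun, Task, or reconciler setup at all.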
Then inside of AddInputResource
you could have:
case v1alpha1.PipelineResourceTypeCluster:
{
	clusterContainer, err := getClusterStep(resource)
	if err != nil {
		return nil, fmt.Errorf("couldn't add cluster resource to task %q: %s", task.Name, err)
	}
	addBuildStep(build, clusterContainer)
}
}
And suddenly since AddInputResource
is only calling out to 2 functions, both of which we have unit tested well, it's a lot less important (i would say not even worth it) to have test coverage at this level, and we can save ourselves a lot of boilerplate in our test code.
I will refactor the func out of the main block of code, but would still like to keep the table test testing the external interface (contract) of the AddInputResource
functionality. Testing the private function ties down the implementation. wdyt?
I agree with @pivotal-nader-ziada's point about testing the functionality instead of the implementation. Unit tests should provide the confidence to refactor code without everything completely breaking apart. Testing private functions tends to restrict the refactor scope without altering the tests.
I see what you're saying - I think this means we should push the interface changes up a level, i.e. update the interface of AddInputResource
so that it doesn't need to have TaskRuns and Tasks instantiated in order to test it :D
We can keep iterating on it.
Unit tests should provide the confidence to refactor code without everything completely breaking apart.
I dunno, I think we should limit the amount of code that one can change without also changing a unit test (and we should limit that to the "imperative shell").
I think very finely grained and focused unit tests put pressure on us to design our code better - and I think the fact that so many objects need to be instantiated for a lot of our existing unit tests is a sign we can improve the design.
Testing private functions tends to restrict the refactor scope without altering the tests.
I find that when we start wanting to test private functions, it's often a sign that we can break up the code more, such that these private functions become public. For example, perhaps we want to have a package that is about clusters specifically, which could have a public function like getClusterStep
I find this tweet from dave cheney related and inspiring:
use packages like a knife to separate code that consumes functionality from code that provide functionality. Use unit tests along the cut line to lock in the behaviour of your packages.
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func TestClusterResource(t *testing.T) {
i feel like we already have a lot of integration tests - what if we had the helm integration test use a cluster resource instead, then we could just have the one test?
I feel this test is specific to the creation of the kubeconfig file, I would prefer to remove the kaniko test instead, WDYT?
If the concern is only about time then we could consider refactoring tests to be parallel since all e2e tests are self contained in their namespaces.
my concern is less about time and more about:
- Maintaining the integration tests
- Relying on integration tests for coverage
this is definitely a philosophical point of view, but i recently watched integration tests are a scam and i realized that relying too much on integration tests can cause us to be lax with our unit test coverage, which in turn promotes bad design choices.
This is besides the fact that integration tests are slow and when they fail, it's very hard to know what to do.
(Watch this space for my upcoming kubecon talk which is pretty much all about this!!)
So I would like to minimize the surface of what we test in our integration tests as much as possible, and focus instead on organizing and unit testing our code in such a way that we don't feel like we need to cover it in integration tests #the-possibly-unrealistic-ideal
I feel this test is specific to the creation of the kubeconfig file, I would prefer to remove the kaniko test instead, WDYT?
in an ideal world what i would really like is just one integration test that includes as much functionality as we think is reasonable, e.g. it uses a kube cluster, has a service account, builds and pushes an image.
Anyway feel free to ignore this and go ahead with this test but you'll probably see a PR from me in the future where I try to move our integration tests toward this ideal - we can always continue the discussion there :D
for this specific case, I can't really merge this integration test checking for kubeconfig with the helm one because we would need an actual 2nd cluster for the helm deployment to target. The kaniko test we can delete, or maybe the hello-world?
I can't really merge this integration test checking for kubeconfig with the helm one because we would need an actual 2nd cluster for the helm deployment to target.
oh i see, i didn't realize that the integration test wasn't actually using the kubeconfig
would you mind making a follow-up issue in our backlog that we should consider having an integration test that actually uses a second cluster?
alternatively:
we could use the cluster resource to (somehow) point at the current cluster, but maybe a different namespace or something?
make the tests simpler and easier to debug, update examples, fix docs
// only one authentication technique per user is allowed in a kubeconfig, so clear out the password if a token is provided
user := resource.Username
pass := resource.Password
if resource.Token != "" {
I think it's helpful to log whether username/password or token is used for authentication. If the user has provided both and a choice is made, then it would be a useful data point for debugging.
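A sketch of what that logging could look like; the function name pickAuth and the log message are illustrative, not the actual implementation:

```go
package main

import (
	"fmt"
	"log"
)

// pickAuth illustrates the "one auth technique per user" rule from the
// kubeconfig writer: if a token is provided it wins and the password is
// cleared, with a log line noting the choice for debugging.
func pickAuth(username, password, token string) (user, pass, tok string) {
	if token != "" {
		if password != "" {
			log.Printf("both token and password provided for user %q; using token", username)
		}
		return username, "", token
	}
	return username, password, ""
}

func main() {
	u, p, t := pickAuth("admin", "s3cr3t", "abc123")
	fmt.Println(u, p == "", t)
}
```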
Great job @pivotal-nader-ziada :)
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: pivotal-nader-ziada, shashwathi
Looking good! if you get tired of the philosophical debate about the tests feel free to remove this and go ahead and merge: /lgtm (But if you can see the same dream I see, a dream of many narrowly focused unit tests and very very very few integration tests, i think we could make this even more awesome!)
I like your dream of many unit tests and few integration tests @bobcatfish, let's see what I can do :D
The following is the coverage report on pkg/.
/retest
/lgtm
/hold cancel
Fixes: #177
To enable tasks to connect to an external cluster other than the host
cluster, for example to deploy an app.
Proposed changes:
- Cluster
- kubeconfig-writer
- pipeline and pipeline params types
- using.md