
Add common labels and annotations to infra and workload parts #1087

Closed
fabiand opened this issue May 28, 2018 · 15 comments

Labels
good-first-issue Identifies an issue that has been specifically created or selected for first-time contributors. help-wanted kind/enhancement lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. size/S topic/documentation topic/infrastructure

Comments

@fabiand
Member

fabiand commented May 28, 2018

In this PR kubernetes/website#8703 SIG Apps is defining a common set of labels that add semantics to Kubernetes objects, allowing tooling to discover relationships between them and understand what an application is composed of.

KubeVirt should follow this pattern.

Benefits:

  • Users who know the pattern will understand KubeVirt better
  • UIs which use the pattern will be able to work with KubeVirt
  • We gather experience which we can provide as feedback to SIG Apps
@fabiand fabiand added kind/enhancement help-wanted size/S topic/documentation good-first-issue Identifies an issue that has been specifically created or selected for first-time contributors. labels May 28, 2018
@fabiand
Member Author

fabiand commented May 28, 2018

Relates a little to #965

@fabiand fabiand added this to the v2 milestone Jun 12, 2018
@SchSeba
Contributor

SchSeba commented Jul 5, 2018

Developer Info:

For this issue we need to work on a couple of different files.

  1. Add version to the build-manifests script

    • Add source hack/version.sh to the build-manifests.sh file inside the hack folder
    • Run the kubevirt::version::get_version_vars function and pass the KUBEVIRT_GIT_VERSION variable to the manifest-templator on lines 43 and 50 via a new parameter --git-version
  2. Add labels to our manifests

    • Add the following labels to the YAMLs in the manifests folder under dev and release
app.kubernetes.io/name: kubevirt
app.kubernetes.io/version: {{.version}}
app.kubernetes.io/managed-by: kubectl
  3. Add logic to the manifest-templator (a rough sketch follows this list)

    • Add a new flag named git-version
    • Add the new flag to the template struct
    • Split the git-version variable and take everything before the +
  4. Add managed-by to the pkg/api/types const section

KubernetesManageBy string = "app.kubernetes.io/managed-by"
  5. Add the managed-by label to the pod

    • Add pod.label[KubernetesManageBy]: "kubevirt" in pkg/virt-controller/template around line 307
  6. Add managed-by and version labels to the VM object

    • Add the managed-by and version labels in pkg/virt-controller/vm inside the updateStatus function
    • managed-by: kubevirt
    • version: <use the version package>.GitVersion
  7. Repeat step 6 for the VMI, ReplicaSet and preset files.
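As referenced in step 3, here is a rough Go sketch of what the templator side could look like. The flag handling and the templateData struct below are assumptions for illustration only, not the actual manifest-templator code:

package main

import (
	"flag"
	"strings"
)

// templateData stands in for whatever struct the manifest-templator renders
// into the YAML templates; the field names here are hypothetical.
type templateData struct {
	Namespace  string
	DockerTag  string
	GitVersion string // label-safe version, e.g. "v0.7.0-25"
}

func main() {
	// Hypothetical --git-version flag, fed from KUBEVIRT_GIT_VERSION in build-manifests.sh.
	gitVersion := flag.String("git-version", "", "git version to render into manifests")
	flag.Parse()

	// Keep only the part before the '+' so the value is a valid label
	// (e.g. "v0.7.0-25+bd5455b35c4c96-dirty" becomes "v0.7.0-25").
	data := templateData{GitVersion: strings.SplitN(*gitVersion, "+", 2)[0]}
	_ = data // the real templator would execute text/template with this data
}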

@tripledes

Hey @SchSeba, not sure if anyone has started working on this; I haven't seen anyone assigned, so I decided to give it a go.

I just pushed some initial changes to my fork but have a few doubts about points 2, 6 and 7.

  • On 2, I'm not sure whether all objects should have the labels, for instance cluster role bindings, services... and on deployments, should the deployment itself have them, or should I just add them under template?

  • On 6 and 7, I've been looking at updateStatus and have doubts about how to add managed-by and version; would it be something like vm.KubernetesManagedBy = "kubectl"?

Small clarification on point 5: would it be kubevirt or kubectl?

Thanks!

@SchSeba
Contributor

SchSeba commented Jul 12, 2018

Hi @tripledes!! :)

Thanks for helping us make kubevirt amazing.

On 2, not sure if all objects should have the labels, for instance, cluster role bindings, services...and on deployments, should the deployment itself have them or should I just add them under template ?

I think just add them to all the kinds inside our templates.

On 6,7, been looking at updateStatus and have doubts on how to add managed-by and version, would it be something like vm.KubernetesManagedBy = "kubectl" ?

Just define a const string like this: const KubernetesManageBy string = "app.kubernetes.io/managed-by"

Then, just like pod.label, you can add it with vmi.label; we use the same metadata as Kubernetes, so you should have a label map on the object.
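A minimal sketch of that idea, assuming the objects expose standard Kubernetes ObjectMeta; the helper name setManagedByLabel is made up for illustration, and the real change would simply assign the label in the relevant controller:

package metautil

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Label key from the const section suggested above.
const KubernetesManageBy string = "app.kubernetes.io/managed-by"

// setManagedByLabel adds the managed-by label to any object's metadata
// (pod, VM, VMI, ...), creating the label map first if it is nil.
func setManagedByLabel(obj metav1.Object) {
	labels := obj.GetLabels()
	if labels == nil {
		labels = map[string]string{}
	}
	labels[KubernetesManageBy] = "kubevirt"
	obj.SetLabels(labels)
}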

Please reply on the issue if you have any problems or any other questions.

Thanks again!

@tripledes

tripledes commented Jul 15, 2018

@SchSeba Thanks for the tips! I just did a new push to my fork https://github.com/tripledes/kubevirt/commit/bd5455b35c4c96402f6a84f5171372f9009b7255.

Currently it seems to compile (testing as I write); I'll have a look at the result. If you happen to get some time to review the commit, please let me know if you see anything wrong.

Regarding unit tests, I haven't had a look yet, but I assume I'll have to modify or add some to make sure the labels are there, right?

UPDATE: Doing some testing, the controller was crashing because the Labels map for the VM/VMI/ReplicaSet was not initialized, so I added the following code to updateStatus():

if vm.Labels == nil {
	vm.Labels = map[string]string{}
}

That seems to do the trick, but now I get:

level=info timestamp=2018-07-16T08:16:48.076988Z pos=vm.go:112 component=virt-controller service=http reason="VirtualMachine.kubevirt.io \"testvm\" is invalid: metadata.labels: Invalid value: \"v0.7.0-25+bd5455b35c4c96-dirty\": a valid label must be an empty string or consist of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyValue',  or 'my_value',  or '12345', regex used for validation is '(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?')" msg="re-enqueuing VirtualMachine default/testvm"

That's because I'm using version.GitVersion, which includes a +, so I guess the + should either be replaced by something that matches the validation regex, or everything from the + onwards should be removed. Would it be OK to add a Version field to version.Info that splits GitVersion and takes the part before the +?
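For reference, a small sketch of why the raw value is rejected and how trimming at the + fixes it, using the apimachinery label validation helper (assuming that package is available in the vendor tree):

package main

import (
	"fmt"
	"strings"

	"k8s.io/apimachinery/pkg/util/validation"
)

func main() {
	gitVersion := "v0.7.0-25+bd5455b35c4c96-dirty"

	// The raw GitVersion contains a '+', which label validation rejects.
	fmt.Println(validation.IsValidLabelValue(gitVersion)) // non-empty error list

	// Taking only the part before the '+' yields a valid label value.
	fmt.Println(validation.IsValidLabelValue(strings.SplitN(gitVersion, "+", 2)[0])) // []
}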

@tripledes

Hey, I've updated the code, https://github.com/tripledes/kubevirt/commit/94ee3c11b9cc1ecf028c72778dd20a7fbbf88e81

Currently I'm doing the split in updateStatus(), but it doesn't feel like the right place: it's a static value, so we probably shouldn't be executing the split every time updateStatus() is called.

So far I've used https://raw.githubusercontent.com/kubevirt/demo/master/manifests/vm.yaml to test and the labels are added:

➜ kubevirt issue_1087 ✓ cluster/kubectl.sh get vm -o yaml | grep -A 3 labels:           
    labels:
      app.kubernetes.io/managed-by: kubectl
      app.kubernetes.io/version: v0.7.0-25
    name: testvm

➜ kubevirt issue_1087 ✓ cluster/kubectl.sh get vmi -o yaml | grep -A 3 labels           
    labels:
      app.kubernetes.io/managed-by: kubectl
      app.kubernetes.io/version: v0.7.0-25
      guest: testvm

@SchSeba
Contributor

SchSeba commented Jul 16, 2018

Hey @tripledes good job!

I agree with you, the split doesn't need to be there.

My recommendation is to do the split at the version.sh level.

We want the version command to print its output just like Kubernetes does:

Kubernetes version output

version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.4", GitCommit:"5ca598b4ba5abb89bb773071ce452e33fb66339d", GitTreeState:"clean", BuildDate:"2018-06-06T08:13:03Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

KubeVirt version output

version.Info{GitVersion:"v0.7.0-42+130b88a1abbe6c-dirty", GitCommit:"130b88a1abbe6ceab3f506ddf65fc557310927dc", GitTreeState:"dirty", BuildDate:"2018-07-15T08:49:21Z", GoVersion:"go1.10", Compiler:"gc", Platform:"linux/amd64"}

So I think it would be good to remove everything after the - character.

@fabiand sounds good?

@tripledes

@SchSeba @fabiand I was checking how they do it in Kubernetes and found out that the output from the last comment is likely from a K8s release; building from master I get:

Client Version: version.Info{Major:"1", Minor:"12+", GitVersion:"v1.12.0-alpha.0.2410+6764a795869d26", GitCommit:"6764a795869d2631eb75f222f776ec5a80b61e37", GitTreeState:"clean", BuildDate:"2018-07-24T08:03:35Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}

I couldn't find anywhere in the k8s code where they use the version for labels, so I guess that's why they don't bother.

If we still want to use the version as a label and keep GitVersion as it is today (for consistency with k8s), a possible solution could be to have another variable, like KUBEVIRT_RELEASE_VERSION, which would be KUBEVIRT_GIT_VERSION but with just the vX.Y.Z part, then add a field to version.Info, like Release, and use it for app.kubernetes.io/version.

How does that sound?
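A rough sketch of what that proposal could look like; the Release field and the KUBEVIRT_RELEASE_VERSION wiring are hypothetical, and the other fields just mirror the version output shown above:

package version

// Info mirrors the fields visible in the version output above, plus the
// proposed Release field holding only the vX.Y.Z part of the git version.
type Info struct {
	GitVersion   string // e.g. "v0.7.0-42+130b88a1abbe6c-dirty"
	GitCommit    string
	GitTreeState string
	BuildDate    string
	GoVersion    string
	Compiler     string
	Platform     string
	Release      string // e.g. "v0.7.0", derived from KUBEVIRT_RELEASE_VERSION at build time
}

// Controllers would then use the label-safe value, e.g.:
//   vm.Labels["app.kubernetes.io/version"] = info.Release  // info being the populated Info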

@kubevirt-bot
Contributor

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

/lifecycle stale


@kubevirt-bot kubevirt-bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 22, 2018
@fabiand
Member Author

fabiand commented Oct 29, 2018

/close

Closing this for now. Will probably be revisited by the operator peeps.

/cc @rthallisey some annotations that the operator might want to add (see the initial link in the description)

@kubevirt-bot
Contributor

@fabiand: Closing this issue.

In response to this:

/close

Closing this for now. Will probably be revisited by the operator peeps.

/cc @rthallisey some annotations that the operator might want to add (see the initial link in the description)

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
