Support metadata labels in ScaledJob template #1311

Closed
mykola-ivadolabs opened this issue Nov 3, 2020 · 15 comments · Fixed by #1322 or #1686
Labels
feature-request (All issues for new features that have not been committed to) · good first issue (Good for newcomers) · help wanted (Looking for support from community)

Comments

@mykola-ivadolabs commented Nov 3, 2020

In version 2.0, the metadata portion of the ScaledJob template is ignored, so it is impossible to set a label on the job that gets created.

Use-Case

To support managed identity in AKS, an "aadpodidbinding" label has to be added to the object whose pods need to run under the managed identity.

To use a ScaledJob to create pods that run under a managed identity, ScaledJob should support a metadata section when defining the job template.

Specification

Allow adding labels to the pod template using a metadata section:

apiVersion: keda.sh/v1alpha1
kind: ScaledJob
metadata:
  name: {scaled-job-name}
spec:
  jobTargetRef:
    template:
      metadata:
        labels:
          aadpodidbinding: "myidentity"
      spec:
        ...
@mykola-ivadolabs added the feature-request and needs-discussion labels on Nov 3, 2020
@mykola-ivadolabs changed the title from "Support metadata labels in scaled object template" to "Support metadata labels in scaled job template" on Nov 3, 2020
@mykola-ivadolabs changed the title from "Support metadata labels in scaled job template" to "Support metadata labels in ScaledJob template" on Nov 3, 2020
@julienperignon commented Nov 10, 2020

This was supported in version 1.5 through ScaledObject; not sure why this got removed in version 2.
It will be a deal-breaker for those who are using Istio and want to disable proxy injection in jobs.

@zroubalik (Member) commented Nov 10, 2020

ScaledObject is mirroring its labels to the created HPA:

keda/controllers/hpa.go

Lines 67 to 69 in 50bec80

for key, value := range scaledObject.ObjectMeta.Labels {
	labels[key] = value
}

Would a similar solution for ScaledJob help you?

And as a temporary workaround you can disable proxy injection on the namespace.
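A rough sketch of what that label mirroring could look like when the Job is built from the ScaledJob (illustrative only; the helper and field names below are assumptions, not the actual KEDA code):

// Illustrative sketch: mirror labels from the ScaledJob onto the generated Job,
// similar to what hpa.go does for ScaledObject. Helper and field names are assumptions.
package scaling

import (
	batchv1 "k8s.io/api/batch/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func jobLabelsFor(scaledJobName string, scaledJobLabels map[string]string) map[string]string {
	labels := map[string]string{
		"scaledjob.keda.sh/name": scaledJobName, // hypothetical built-in label
	}
	// Copy user-provided ScaledJob labels onto the generated Job.
	for key, value := range scaledJobLabels {
		labels[key] = value
	}
	return labels
}

func newJobFor(name, namespace string, scaledJobLabels map[string]string, jobSpec batchv1.JobSpec) *batchv1.Job {
	return &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{
			GenerateName: name + "-",
			Namespace:    namespace,
			Labels:       jobLabelsFor(name, scaledJobLabels),
		},
		Spec: jobSpec,
	}
}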

@julienperignon commented:
> ScaledObject is mirroring its labels to the created HPA:
>
> keda/controllers/hpa.go
>
> Lines 67 to 69 in 50bec80
>
> for key, value := range scaledObject.ObjectMeta.Labels {
> 	labels[key] = value
> }
>
> Would a similar solution for ScaledJob help you?
>
> And as a temporary workaround you can disable proxy injection on the namespace.

Yeah, something similar would work. I have zero knowledge of Go unfortunately, so I'm not really able to submit a PR for this.

Unfortunately my jobs are in the same namespace as my microservices, so I cannot disable injection at the namespace level.

@zroubalik added the good first issue and help wanted labels and removed the needs-discussion label on Nov 11, 2020
@AIchovy (Contributor) commented Nov 12, 2020

/assign I'll try to solve this issue.

@Kesudan commented Nov 13, 2020

We have run into the same issue and are having to go back to 1.5, as this is a show-stopper for us, unless there is some other way of assigning labels for a ScaledJob in 2.0?

@zroubalik (Member) commented:
Reopening, since the problem is not fully solved: #1322 (comment)

@pplavetzki commented Feb 20, 2021

I've been testing this for a few hours and it seems that the metadata isn't being deserialized into the ScaledJob object. When I submit a ScaledJob like this:

apiVersion: keda.sh/v1alpha1
kind: ScaledJob
metadata:
  name: redis-sampler-7
spec:
  jobTargetRef:
    parallelism: 1                            
    completions: 1                            
    activeDeadlineSeconds: 1000                
    backoffLimit: 5                           # Specifies the number of retries before marking this job failed. Defaults to 6
    template:
      metadata:
        labels:
          name: "job-pod"
          work: "full"
        annotations:
          sidecar.istio.io/inject: "false"
          do.something.dot.com: "true"
      spec:
        containers:
        - name: redis-client
          image: redis-client:v0.0.1
          env:
          - name: REDIS_HOST
            value: "redis-svc"
          - name: REDIS_PORT
            value: "6379"
          - name: REDIS_LIST
            value: "mylist"
  pollingInterval: 30                         # Optional. Default: 30 seconds
  successfulJobsHistoryLimit: 5               # Optional. Default: 100. How many completed jobs should be kept.
  failedJobsHistoryLimit: 5                   # Optional. Default: 100. How many failed jobs should be kept.
  maxReplicaCount: 100                        # Optional. Default: 100
  triggers:
  - type: redis
    metadata:
      address: redis-svc.keda-sample.svc.cluster.local:6379 # Format must be host:port
      listName: mylist # Required
      listLength: "5" # Required
      enableTLS: "false" # optional
      databaseIndex: "0" # optional

Running kubectl get scaledjob ... then returns:

...
spec:
  failedJobsHistoryLimit: 5
  jobTargetRef:
    activeDeadlineSeconds: 1000
    backoffLimit: 5
    completions: 1
    parallelism: 1
    template:
      metadata: {}
      spec:
        containers:
        - env:
          - name: REDIS_HOST
            value: redis-svc
          - name: REDIS_PORT
            value: "6379"
          - name: REDIS_LIST
            value: mylist
          image: redis-client:v0.0.1
          name: redis-client

I've looked at the API and tried debugging through it, but I can't seem to find why the metadata isn't being deserialized correctly. If someone points me to the correct location, I can try to fix it and create the PR.

As a test I manually injected a label and an annotation into the spec in the createJob method, so I can verify that if we can get the data into the ScaledJob spec, we can then add it to the pod spec.
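Roughly, that test hack looked like this (illustrative only; this is not the actual createJob code):

package scaling

import batchv1 "k8s.io/api/batch/v1"

// addTestPodMetadata forces a label and an annotation onto the generated pod template,
// just to confirm that anything reaching the ScaledJob spec propagates to the created pods.
func addTestPodMetadata(jobSpec *batchv1.JobSpec) {
	if jobSpec.Template.ObjectMeta.Labels == nil {
		jobSpec.Template.ObjectMeta.Labels = map[string]string{}
	}
	jobSpec.Template.ObjectMeta.Labels["work"] = "full"

	if jobSpec.Template.ObjectMeta.Annotations == nil {
		jobSpec.Template.ObjectMeta.Annotations = map[string]string{}
	}
	jobSpec.Template.ObjectMeta.Annotations["sidecar.istio.io/inject"] = "false"
}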

Cheers!

@fjmacagno commented Mar 15, 2021

I am using ArgoCD, and I have found that when I try to apply a ScaledJob with a metadata component, it thinks that the metadata doesn't exist even after a sync.
Specifically, it notices a difference between the stored job and the actual manifest, where the stored one has metadata: {}, but the manifest has metadata populated.

The page https://argoproj.github.io/argo-cd/user-guide/diffing/ suggests that there could be a diff after syncing if:

There is a bug in the manifest, where it contains extra/unknown fields from the actual K8s spec. These extra fields would get dropped when querying Kubernetes for the live state, resulting in an OutOfSync status indicating a missing field was detected.

Is it possible that metadata isn't properly included in the manifest spec?

KEDA appears to use the k8s job spec type batchv1.JobSpec: https://github.com/mosesyou/keda/blob/main/api/v1alpha1/scaledjob_types.go#L29

I haven't found the source for batchv1.JobSpec, but based on the docs (https://kubernetes.io/docs/concepts/workloads/controllers/job/#pod-template) it seems like metadata may not be an option.

Edit: Traced it to
https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/api/batch/v1/types.go#L79
and then to
https://github.com/kubernetes/kubernetes/blob/58814dbe3fc13602fb8a9ac67f9e046c9eb09326/staging/src/k8s.io/api/core/v1/types.go#L3724

It looks like metadata is included properly, so maybe it is a serialization problem?
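For anyone following the links above, the nesting is roughly this (paraphrased and trimmed, not the verbatim upstream definitions):

package sketch

import (
	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podTemplateMetaOf shows the nesting: JobSpec.Template is a corev1.PodTemplateSpec,
// and that template does carry an ObjectMeta for labels and annotations.
func podTemplateMetaOf(jobSpec batchv1.JobSpec) metav1.ObjectMeta {
	var tmpl corev1.PodTemplateSpec = jobSpec.Template
	return tmpl.ObjectMeta
}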

@julienperignon commented:
> I am using ArgoCD, and I have found that when I try to apply a ScaledJob with a metadata component, it thinks that the metadata doesn't exist even after a sync.
> Specifically, it notices a difference between the stored job and the actual manifest, where the stored one has metadata: {}, but the manifest has metadata populated.
>
> The page https://argoproj.github.io/argo-cd/user-guide/diffing/ suggests that there could be a diff after syncing if:
>
> There is a bug in the manifest, where it contains extra/unknown fields from the actual K8s spec. These extra fields would get dropped when querying Kubernetes for the live state, resulting in an OutOfSync status indicating a missing field was detected.
>
> Is it possible that metadata isn't properly included in the manifest spec?
>
> KEDA appears to use the k8s job spec type batchv1.JobSpec: https://github.com/mosesyou/keda/blob/main/api/v1alpha1/scaledjob_types.go#L29
>
> I haven't found the source for batchv1.JobSpec, but based on the docs (https://kubernetes.io/docs/concepts/workloads/controllers/job/#pod-template) it seems like metadata may not be an option.
>
> Edit: Traced it to
> https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/api/batch/v1/types.go#L79
> and then to
> https://github.com/kubernetes/kubernetes/blob/58814dbe3fc13602fb8a9ac67f9e046c9eb09326/staging/src/k8s.io/api/core/v1/types.go#L3724
>
> It looks like metadata is included properly, so maybe it is a serialization problem?

@fjmacagno

It should come from the PodTemplateSpec type (field name Template) under the JobSpec.

See : https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/api/batch/v1/types.go#L137
and : https://github.com/kubernetes/kubernetes/blob/58814dbe3fc13602fb8a9ac67f9e046c9eb09326/staging/src/k8s.io/api/core/v1/types.go#L3728

@fjmacagno commented:
@zroubalik I am trying to debug this issue locally, but it is my first foray into the KEDA source (and CRDs and operators and Go, unfortunately), so I lack context.

I have the local environment working and reproducing the issue; do you have any suggestions for where to begin the search? Is there any sort of post-processing for the CRDs, object translation, etc. that is a likely culprit?

@fjmacagno commented:
Discoveries so far:

  1. If you add a metav1.ObjectMeta to any other area of the CRD, it has the same problem, and it is empty.
  2. Copying the metav1.ObjectMeta definition into scaledjob_types.go and using it instead works fine (see the sketch after this list).
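A minimal sketch of what discovery 2 amounts to (the field set here is an example, not the exact copy that was tested):

package v1alpha1

// JobTemplateMeta is a locally defined, trimmed-down stand-in for metav1.ObjectMeta,
// declared in scaledjob_types.go so that controller-gen generates a real schema for it.
type JobTemplateMeta struct {
	Labels      map[string]string `json:"labels,omitempty"`
	Annotations map[string]string `json:"annotations,omitempty"`
}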

@fjmacagno commented:
Found a workaround:

diff --git a/config/crd/bases/keda.sh_scaledjobs.yaml b/config/crd/bases/keda.sh_scaledjobs.yaml
index 105eb47..b626fa3 100644
--- a/config/crd/bases/keda.sh_scaledjobs.yaml
+++ b/config/crd/bases/keda.sh_scaledjobs.yaml
@@ -158,6 +158,7 @@ spec:
                       metadata:
                         description: 'Standard object''s metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata'
                         type: object
+                        x-kubernetes-preserve-unknown-fields: true
                       spec:
                         description: 'Specification of the desired behavior of the
                           pod. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status'

Not a good solution since this file is generated, but it will do if anyone needs a fix badly.

@fjmacagno commented:
The problem is that controller-gen does not include anything under metadata for nested constructs: kubernetes-sigs/controller-tools#448

@zroubalik (Member) commented:
@fjmacagno great job! I was suspecting problems related to this setting.

We can patch the CRD during generation, the same way it is done for #927 here:
https://github.com/kedacore/keda/blob/b9d350145c71fbd4680062aee478660d5025b886/config/crd/patches/scaledjob_patch.yaml
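Such a patch could look roughly like this (a sketch only; the patch style and path are assumptions based on the diff above, not the actual patch file in the repo):

# Sketch of a JSON6902-style kustomize patch re-adding the setting lost during generation.
- op: add
  path: /spec/versions/0/schema/openAPIV3Schema/properties/spec/properties/jobTargetRef/properties/template/properties/metadata/x-kubernetes-preserve-unknown-fields
  value: true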

@abhiraialld commented Apr 22, 2024

I have been running into a similar issue when I try to run KEDA on minikube in a Windows environment. The version of KEDA installed is 2.13.1 and therefore should have this fix. I am using the below YAML file (the relevant part):

apiVersion: keda.sh/v1alpha1
kind: ScaledJob
metadata:
  name: abcd
spec:

However, when I try applying this on my minikube cluster I get the following error :

C:\sampleDocker>kubectl apply -f keda_sclaedjob.yml
Error from server (BadRequest): error when creating "keda_sclaedjob.yml": ScaledJob in version "v1alpha1" cannot be handled as a ScaledJob: strict decoding error: unknown field "spec.metadata"

Can someone please help here?
