
Consider/document how to use Jobs with Kustomize #168

Closed
paultiplady opened this issue Jul 15, 2018 · 19 comments
Labels: lifecycle/rotten

@paultiplady

As part of some CI/CD steps it's necessary to run a one-off job from inside the cluster. An example from my pipeline would be running a Django database migration before upgrading the code in staging/production.

There are a few ways of achieving this that I've come across:

  1. as an initContainer
  2. as a k8s Job
  3. using kubectl run

I've tried these in Kustomize, and getting k8s Jobs to play nice is a little challenging.

These Jobs need to consume Secrets/Configmaps that are created and hashed by Kustomize, so I think they need to be generated by Kustomize too.

Since a Job name must be unique, unless I have some way of transforming the Job name into something unique, I can't re-run a job inside the same namespace (e.g. I want to apply new code in staging => re-run migration job). A possible workaround here would be to always try to do a kubectl delete job myjob before applying the new kustomize build output, but then I lose the old job history.
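
For reference, the delete-then-apply workaround looks roughly like this (the job name and overlay path are just placeholders for my setup):

kubectl delete job django-migrate --ignore-not-found
kustomize build overlays/staging | kubectl apply -f -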

In some cases (per-branch / per-commit / per-tag jobs) it might be enough to do name hashing based on the contents of the Job. I could easily imagine wanting to run a job on every apply, though, i.e. requiring fully randomized Job names.

I don't see any existing issues/discussion on this subject; what's the current best practice here?

@Liujingfang1
Contributor

@paultiplady I'm trying to understand your problem. You want to run some CI jobs and you want every job to have a different name. You can try this: put the Secrets/ConfigMaps and your Job into the same kustomization. Then for every job you need to run, add a different namePrefix, for example the commit hash, with kustomize edit set nameprefix <commit hash>, then kustomize build . | kubectl apply -f -. The jobs will then have different names.
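
A minimal sketch of that suggestion, assuming your CI exposes the commit hash in an environment variable (COMMIT_SHA and the directory name are placeholders):

cd my-kustomization
kustomize edit set nameprefix "${COMMIT_SHA}-"
kustomize build . | kubectl apply -f -

Note that the prefix is applied to every resource in the kustomization, not just the Job.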

@Liujingfang1 Liujingfang1 self-assigned this Aug 20, 2018
@lswith
Contributor

lswith commented Aug 21, 2018

Here is the context for myself and possibly others:

I have a CI/CD pipeline that requires database migrations to run before a deployment begins. The database migration needs to complete successfully (exit with a zero status) before the deployment can start its rollout.

I am currently using Kubernetes Jobs to run the migration beforehand, checking the result and then setting a deployment's image if it succeeds.

I understand coupling migrations to deployments is bad practice, but in general there are many times when I need to couple a Kubernetes Job to a deployment rollout. An initContainer isn't useful here because an initContainer runs per pod. I need an initJob that can run once before a deployment rollout.

Obviously this scope is quite large, but it would be nice to use Kustomize to generate a Job per deployment, similar to what is being done for ConfigMaps.

@Liujingfang1
Contributor

@paultiplady @lswith There is a proposal to handle your database migration before rolling out a deployment: kubernetes/community#1171. Until these hooks are available, continuing to use a Kubernetes Job is a good approach.

Now, how could Kustomize help with this? I don't think Kustomize can handle the whole thing all by itself; coupling Kustomize with some scripts could definitely help. For example, you can try separating your configs into two kustomizations: one for the Job running the database migration, and one for the deployment itself. Put any ConfigMaps and Secrets shared by the Job and the deployment into a common base. The script then needs to kubectl apply the Job kustomization and watch its status; once it succeeds, the script applies the other kustomization.
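
A rough sketch of such a script, assuming the two kustomizations live in migration/ and app/ and the Job is named db-migrate (all placeholders):

# Apply the migration Job and wait for it to finish.
kustomize build migration | kubectl apply -f -
kubectl wait --for=condition=complete job/db-migrate --timeout=300s

# Only then roll out the application itself.
kustomize build app | kubectl apply -f -

A real script would also need to handle the case where the Job fails rather than completes; kubectl wait as written would simply time out in that case.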

@Liujingfang1
Contributor

@paultiplady @lswith Have you had a chance to look at and try the two-kustomization approach? Any suggestions or comments? If it looks good, we will document this approach.

@lswith
Contributor

lswith commented Aug 23, 2018

I am currently using the approach of a bash script that handles running a job before generating the deployment. I've separated it into two kustomizations.

@burdiyan
Contributor

burdiyan commented Sep 6, 2018

I would suggest trying to run these one-off jobs in bare Pods without any controller. I haven't tried it, but it seems like it may be a solution.
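
For illustration, such a bare Pod could look something like this (name, image and command are made up):

apiVersion: v1
kind: Pod
metadata:
  name: db-migrate
spec:
  restartPolicy: Never
  containers:
  - name: migrate
    image: myapp:latest
    command: ["python", "manage.py", "migrate"]

Note that a bare Pod has the same name-uniqueness constraint as a Job, so the renaming question discussed above still applies.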

@DeadZen

DeadZen commented Nov 11, 2018

You have even more issues if you need to run multiple jobs in succession, and if a migration step fails, Kubernetes will just keep trying to rerun it. It seems the solution to each issue is to just create a deployment with a replica count of 1 and, when it's done successfully, deploy. Is there an effective alternative?

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 26, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 26, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@silviogutierrez

@Liujingfang1: I think the simplest thing here would be first-class support for setting a jobSuffix, editable with kustomize edit set.

Right now, my release script (that is, the script that creates a release, not the deployment) does this:

cd $PROJECT_ROOT/containers/release/base
kustomize edit set image joy-client=$CLIENT_IMAGE:$VERSION_TAG
kustomize edit set image joy-server=$SERVER_IMAGE:$VERSION_TAG

The base has everything of mine, including the migration job. If we had a simple jobSuffix command or something of the like, then I could just also call, right below:

kustomize edit set jobSuffix migrations=$VERSION_TAG

Or something like that. That would update my kustomization.yaml nicely and let my deploy script stay as simple as:

kustomize build containers/release/overlays/production | kubectl apply -f -

No waiting, or anything else.

@jamshid

jamshid commented Mar 29, 2020

I'm surprised k8s / Kustomize don't seem to have a good way to accomplish something like a one-off db migration. Seems like that could have been handled pretty well with docker-compose run.

@matti
Contributor

matti commented Mar 31, 2020

workaround: #903 (comment)

@stefanv

stefanv commented Apr 23, 2020

The workaround suggested in the comment above is to delete the old job and then re-run it. That works, but ideally you would simply create a new job with a name like job-name-[timestamp] or job-name-[iteration]. That is, Kustomize could provide simple templating support so that a unique job id is generated for each invocation.

Given that Kustomize markets itself as "template-free configuration", I suppose this would be considered out of scope.
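
That said, something close to the unique-name idea can be scripted today with the existing nameSuffix mechanism, assuming the migration Job sits in its own kustomization and your kustomize version has the edit set namesuffix command (the directory and names below are made up):

cd overlays/production/migration-job
kustomize edit set namesuffix -- "-$(date +%s)"
kustomize build . | kubectl apply -f -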

@jcmcken

jcmcken commented Feb 27, 2023

/reopen

@k8s-ci-robot
Contributor

@jcmcken: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@strowk

strowk commented Oct 25, 2023

You can ask Kubernetes to remove the Job after it is done using ttlSecondsAfterFinished; see:
https://kubernetes.io/docs/concepts/workloads/controllers/job/#clean-up-finished-jobs-automatically

This lets you run the same Job on every kustomize apply, as long as you don't do it more often than the specified TTL. I set 30 seconds, because for now that is enough for me to read logs during development, and I don't deploy more often than that.

With a bit of Kustomize patching you can set the value to 0 for your other environments where you do not control how often deployments happen; zero results in immediate removal in my test with k3d.

The downside is that troubleshooting might be harder in some situations, but if you collect logs and events from pods centrally in your cloud environments, this can be worked around.
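
For illustration, the relevant part of such a Job would look roughly like this (name, image and TTL are just examples):

apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate
spec:
  ttlSecondsAfterFinished: 30   # delete the Job (and its Pods) shortly after it finishes
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: migrate
        image: myapp:latest
        command: ["python", "manage.py", "migrate"]

An overlay for the other environments can then patch ttlSecondsAfterFinished down to 0.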

@lopezchr

lopezchr commented Feb 7, 2024

I'm using Kustomize with Argo CD and I need to specify the changes in the kustomization files. It would be nice if we could specify a suffix, like Kustomize does with ConfigMaps or Secrets.
