
Variable Docker image tags #51

Closed
bickfordb opened this issue Jun 4, 2018 · 25 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@bickfordb

My application has a number of services that use the same Docker image (tagged with the application repository's git SHA). My CI workflow looks like:

  1. Build my app's Docker image and push it to a registry as ${APP}:${GIT_COMMIT_SHA}
  2. helm upgrade --install ${APP} charts/${APP} --set GIT_COMMIT_SHA=${GIT_COMMIT_SHA}

Can you suggest how to make this workflow work with kustomize? It seems like I would have to add a step that substitutes the image (with sed?) in all the templates prior to kustomize build, or generate a deployment patch for each service.

@dlorenc

dlorenc commented Jun 6, 2018

Why was this closed? I had the same question.

@captncraig

I would also like to know the solution to this. It seems like a fairly common problem that most people will hit when upgrading a deployment.

@shotat

shotat commented Jul 27, 2018

+1

@Liujingfang1
Contributor

kustomize v1.0.5 adds a new feature, imageTags, which lets you update image tags easily.
Take a look at the example here: https://github.com/kubernetes-sigs/kustomize/blob/master/examples/imageTags.md
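For reference, a minimal kustomization.yaml using that feature looked roughly like the following (the image name and tag here are illustrative; in later kustomize versions this field was replaced by `images`):

```yaml
# kustomization.yaml (kustomize v1.x syntax; newer versions use `images` instead)
imageTags:
- name: my-app      # image name as it appears in the resources
  newTag: v1.2.3    # tag to substitute at build time
```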

@hayesgm

hayesgm commented Aug 14, 2018

Any follow-up here? The image tags command works great, but not necessarily with CI/CD workflows. Specifically, if I build an image in my CI pipeline and then want to deploy it, I can't get that patch checked back into git (since it was built and tagged by CI, not by a developer). My current solution is to always leave latest in git and call the following in the CI/CD pipeline:

# build image
docker build -t "my-image:latest" -t "my-image:$(git rev-parse HEAD)" .

# set overlay image tag locally
(cd "k8s/overlays/stage" && kustomize edit set imagetag "my-image:$(git rev-parse HEAD)")

# deploy overlay
kustomize build "k8s/overlays/stage" | kubectl apply -f -

I think we should form a good story around CI/CD; this is just one suggestion for it.

@gautaz

gautaz commented Jun 3, 2019

@Liujingfang1 The example link is not available anymore on master (here it is from tag v1.0.11).

I guess it is now available here:
https://github.com/kubernetes-sigs/kustomize/blob/master/examples/image.md

Following what @hayesgm said, I also do not understand how to use this to tag a Docker image with a git tag in an automated process where the git tags are your source of truth. From what I understand, though, this particular point doesn't seem closely related to the OP's request. As I am quite new to kustomize, I am most certainly missing something; can you detail this a bit more?

@tsloughter

@bickfordb why did you close this :). As far as I can tell there is still not a solution.

@tsloughter

I suppose using set image isn't too bad, and I see that it is purposely not possible to set the image tag based on the environment when building the resources: https://github.com/kubernetes-sigs/kustomize/blob/master/docs/eschewedFeatures.md#build-time-side-effects-from-cli-args-or-env-variables

But an issue I have with set image is not being able to tell it where kustomization.yaml is. Why does edit not take a path just like build?

@jroper

jroper commented Dec 10, 2019

Agreed that this is very clunky from a CD perspective; editing files that are already checked into git as part of a deploy process feels so wrong.

@jroper

jroper commented Dec 10, 2019

What would be nice is if the kustomize maintainers actually published some best practices for how to do this in the real world. Do the maintainers seriously expect us to write scripts that leave the git repo in a modified state with uncommitted changes after doing a deploy? I've always been taught that that's bad practice. The solution I have in my scripts at the moment is to create a kustomization-template.yaml; the first thing my script does is cp kustomization-template.yaml kustomization.yaml, and kustomization.yaml is in .gitignore. Is this the type of workflow the kustomize maintainers expect us to use? If so, perhaps document it. If not, perhaps suggest and document an alternative.
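A minimal sketch of that template-copy workflow (the file names, placeholder token, and example tag are all illustrative, not part of any official kustomize convention):

```shell
# Demonstrated in a temp dir: the committed file is kustomization-template.yaml
# containing a placeholder tag; kustomization.yaml is generated at deploy time
# and listed in .gitignore, so the working tree stays clean after a deploy.
set -eu
WORKDIR="$(mktemp -d)"
cd "$WORKDIR"

# The template as it would be committed to git.
cat > kustomization-template.yaml <<'EOF'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
images:
- name: my-image
  newTag: TAG_PLACEHOLDER
EOF

# At deploy time, substitute the real tag. A fixed example SHA is used here;
# normally this would be something like "$(git rev-parse HEAD)".
TAG="abc1234"
sed "s/TAG_PLACEHOLDER/${TAG}/" kustomization-template.yaml > kustomization.yaml
cat kustomization.yaml
```

The generated kustomization.yaml can then be fed to `kustomize build` as usual, without ever being committed.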

@tkellen
Contributor

tkellen commented Dec 10, 2019

Our project uses something like this for deploying services we've built and control the entire config for:

#!/usr/bin/env bash
# render a kubernetes manifest in any namespace
set -euo pipefail

function usage {
  cat <<EOF
Usage: render-manifest <path-to-kustomization-file>

Render a kubernetes manifest in the desired namespace, and, if applicable, tag
the container being deployed. Ideally this command wouldn't exist but for now
it is useful for papering over functionality missing from our kubernetes config
management tool kustomize.

Example Usage:
  render-manifest projects/shared/deployments/ephemeral
  (cd projects/service-echo && DEPLOYMENT_NAME=dev make manifest)
EOF
}

if [[ $# -eq "0" ]]; then
  usage
  exit 1
fi

# Get the absolute path to this script so we can be sure the git commands we
# run are executed in the repo regardless of where the caller is.
SCRIPT_PATH="$(cd "$(dirname "${BASH_SOURCE[0]}")"; pwd -P)"

# Ensure manifest path has a kustomization.yml file in it.
if [[ ! -f "${1}/kustomization.yml" ]]; then
  printf "No kustomization.yml found in %s\n" "${1}"
  exit 1
fi

# Get absolute path to manifest file.
MANIFEST_PATH=${PWD}/${1}

# Generate a temporary directory to hold our generated kustomization.yml and
# do our best to clean up after ourselves. This could go away if kustomize
# allowed building a manifest provided on stdin.
TEMP_DIR=$(mktemp -d)
KUSTOMIZATION_FILE="${TEMP_DIR}/kustomization.yml"
trap "exit 1" HUP INT PIPE QUIT TERM
trap 'rm -rf "${TEMP_DIR}"' EXIT

# Determine the current branch no matter where this script is executed from.
BRANCH_NAME=${BRANCH_NAME:-"$(cd "${SCRIPT_PATH}" && git rev-parse --abbrev-ref HEAD)"}

# Determine which namespace we'll deploy to, defaulting to current branch name.
NAMESPACE=${NAMESPACE:-${BRANCH_NAME}}

# Generate a simple kustomization file with a namespace and one resource that
# points back to the manifest we're trying to control. We could use the command
# `kustomize edit set namespace ...` to modify the requested manifest in place
# but it would change the file in the repo. This doesn't matter in CI because we
# don't check it in, but it does matter for local development troubleshooting
# where this sort of inline editing always results in accidental commits.
cat <<EOF > ${KUSTOMIZATION_FILE}
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: ${NAMESPACE}
resources:
- ../..${MANIFEST_PATH}
EOF

# Ensure container is tagged if IMAGE/TAG are provided. As before, we could use
# `kustomize edit set image ...` but the same concerns about accidental commits
# are present.
if [[ ${IMAGE:-} != "" && ${TAG:-} != "" ]]; then
  cat <<EOF >> ${KUSTOMIZATION_FILE}
images:
- name: container
  newName: ${IMAGE}
  newTag: ${TAG}
EOF
fi

>&2 printf "Rendering:\n%s\n\n" "$(sed 's/^/  /' "${KUSTOMIZATION_FILE}")"

# Ensure the namespace we're trying to deploy into exists.
cat <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: ${NAMESPACE}
---
EOF
# Output our manifest.
kustomize build "${TEMP_DIR}"

Will be adding envsubst to this soon as well to paper over how broken variables are.

@tsloughter
Copy link

I created a transformer plugin https://github.com/tsloughter/kustomize-git-ref-transformer

@melissachang

@bickfordb Can you please reopen this? (I can't reopen myself.)

@bickfordb bickfordb reopened this Dec 17, 2019
@arashbi

arashbi commented Dec 20, 2019

Does anyone know how skaffold does it? When I run skaffold with kustomize, it always deploys the latest image, regardless of what kustomize specifies.

@ghostbar

Quick hack (I would only recommend this for CI/CD on development environments):

In the image tag use something unique, like here-is-image-tag and then just:

kustomize build dir/overlays/dev | \
  sed "s/:here-is-image-tag/:${MY_GIT_SHA}/" | \
  kubectl -n dev apply -f -

Yeah, you're using templates to avoid this hackity hack, but kustomize is for GitOps and if you can't git, then you're gonna end up using a little bit of sed.

For production releases it makes sense to follow kustomize edit set image instead of this.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 23, 2020
@hayesgm

hayesgm commented Apr 24, 2020

For what it's worth, we've stopped using kustomize as this issue has been left as a won't fix basically since the inception of this project.

@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 24, 2020
@iamnoah

iamnoah commented May 24, 2020

Just came to unsubscribe because I think the complaints here are a bit silly. set image is about the simplest solution you could ask for if you want to dynamically update the image. The complaints here amount to “it's not a one-liner.” Write your own two-line script to do it then!

As for “best practices,” if you can’t articulate why it is a problem, you are just stating your preference.

If you are doing GitOps, you never needed it: change the image and commit it.

@tsloughter

I guess iamnoah won't see this, but the issue isn't that it isn't a one-liner; it's that it changes the committed files.

If I push a tag to git but then my CI modifies the content (because it runs set image) I am no longer deploying that tag but a modified version of it that is only visible from the CI run.

Which is why I created https://github.com/tsloughter/kustomize-git-ref-transformer rather than a "one-liner" script, but I would prefer not having to install a transformer, because plugins don't seem that stable a feature yet.

@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@evolutics

evolutics commented Apr 16, 2022

Update: As @TLmaK0 mentioned, the following relies on behavior that has turned out to be a bug (see #4731). Thus, I recommend an alternative: Skaffold, for example, solves this problem and also generally improves Kubernetes workflows.

If this helps anyone: it is possible to use an image tag from an environment variable, without having to edit files for each different tag. This is useful if your image tag needs to vary without changing version-controlled files.

Standard kubectl is enough for this purpose – no need for envsubst, sed, templating, etc. In short, use a configMapGenerator with data populated from environment variables. Then add replacements that refer to this ConfigMap data to replace relevant image tags.

An example kustomization.yaml file could look like so:

# Generate a ConfigMap based on the environment variables in the file `.env`.
configMapGenerator:
  - name: my-config-map
    envs:
      - .env

replacements:
  - source:
      # Replace any matches by the value of environment variable `MY_IMAGE_TAG`.
      kind: ConfigMap
      name: my-config-map
      fieldPath: data.MY_IMAGE_TAG
    targets:
      - select:
          # In each Deployment resource …
          kind: Deployment
        fieldPaths:
          # … match the image of `my-container` …
          - spec.template.spec.containers.[name=my-container].image
        options:
          # … but replace only the second part (image tag) when split by ":".
          delimiter: ":"
          index: 1

resources:
  - deployment.yaml

In the same folder, you need a file .env with the environment variable name only (note: just the name, no value assigned):

MY_IMAGE_TAG

File deployment.yaml in the same folder:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  selector:
    matchLabels:
      app: my-pod
  template:
    metadata:
      labels:
        app: my-pod
    spec:
      containers:
        - name: my-container
          # Tag `placeholder` is arbitrary as long as it does not contain a ":".
          image: nigelpoulton/k8sbook:placeholder

Now MY_IMAGE_TAG from the local environment is integrated as the image tag when running kubectl kustomize, kubectl apply --kustomize, etc.

Demo:

MY_IMAGE_TAG=2.0 kubectl kustomize .

This prints the generated image tag, which is 2.0 as desired:

# …
spec:
  # …
  template:
    # …
    spec:
      containers:
        - image: nigelpoulton/k8sbook:2.0
          name: my-container

@mdrakiburrahman

In case this helps others, there's a nice query-like way to do this that's more resilient, using yq:

Here's a Kubernetes Job I want to update the imageTag for, and the Kustomize output contains a bunch of other resources inline:

# other K8s resources
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
...
---
apiVersion: batch/v1
kind: Job
metadata:
  name: arc-ci-launcher
  ...
spec:
  template:
    spec:
     ...
      containers:
      - name: arc-ci-launcher
        # Placeholder: will be overwritten via yq
        image: arc-ci-launcher:latest

I want to change arc-ci-launcher:latest to the content of LAUNCHER_IMAGE_OVERRIDE and apply it without changing directories:

# Apply on overlay folder as you normally would
kubectl kustomize ${KUSTOMIZE}/overlays/${KUBERNETES_ENVIRONMENT} |
    yq e ". | select(.kind == \"Job\") as \$job | select(.kind != \"Job\") as \$other | \$job.spec.template.spec.containers[0].image = \"${LAUNCHER_IMAGE_OVERRIDE}\" | (\$job, \$other)" |
    kubectl --dry-run=client apply -f -

# ...
# clusterrolebinding.rbac.authorization.k8s.io/arc-ci-launcher created (dry run)
# job.batch/arc-ci-launcher created (dry run)

All works out fine, and yq's querying surface area is super-robust. If you know a little bit of SQL, the query above is pretty readable. You should be able to drill down further and also select on the Job's name via yq.

@TLmaK0

TLmaK0 commented Apr 5, 2024

@evolutics solution does not work anymore because of this: #4731
