
Inject Host into Ingress #347

Open
lswith opened this issue Sep 13, 2018 · 63 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature. triage/needs-information Indicates an issue needs more information in order to work on it.

Comments

@lswith
Contributor

lswith commented Sep 13, 2018

I think a reasonably common use case is to swap an ingress's host value:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: helloworld
spec:
  rules:
# This value should be editable
  - host: example.com
    http:
      paths:
      - backend:
          serviceName: helloworld
          servicePort: 80

Can we get a feature to set this?

@Liujingfang1
Contributor

Recently we added JSON patch support, which is a good solution for this problem. Take a look at our example: https://github.com/kubernetes-sigs/kustomize/blob/master/examples/jsonpatch.md
This feature is currently available from HEAD; we will release a new version soon.
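For the ingress above, the JSON-patch route would look roughly like this (a sketch; the overlay layout and file names are illustrative):

```yaml
# overlay/kustomization.yaml (sketch; names are illustrative)
resources:
- ../base          # contains the helloworld Ingress from above
patchesJson6902:
- path: set-host.yaml
  target:
    group: extensions
    version: v1beta1
    kind: Ingress
    name: helloworld
```

with `set-host.yaml` containing a single JSON-6902 operation:

```yaml
# set-host.yaml (sketch; staging.example.com is a placeholder)
- op: replace
  path: /spec/rules/0/host
  value: staging.example.com
```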

@lswith
Contributor Author

lswith commented Sep 13, 2018

ah right. Just create a JSON Patch and then use that to edit the build.

@lswith lswith closed this as completed Sep 13, 2018
@iamwel

iamwel commented May 25, 2019

I'm sorry @Liujingfang1, I read the example, and it does not seem like a suitable solution to what is, as @lswith mentioned, a common use case. I was thinking of incorporating Kustomize into our workflow as a low-overhead alternative to creating a Helm chart, but a chart seems to be a much more elegant alternative at this point. Any opportunity for native ingress variables in Kustomize?

@jonathanunderwood

I agree: being able to patch the Ingress host value is super useful, and it would be preferable to be able to do it with a strategic merge. I am seeing a lot of feature requests closed with "use a JSON patch", without much consideration of the use cases.

@davinkevin

Same for me... 👍 Could we reopen this one?

davinkevin added a commit to davinkevin/Podcast-Server that referenced this issue Sep 1, 2019
Due to an issue in kustomize (kubernetes-sigs/kustomize#347), I duplicate the whole ingress in the kserver customization.
@streetmapp

Also commenting in hopes that this gets looked at as something that should be supported natively. Pushing JSON patches as the solution doesn't seem viable for all use cases. For obscure things that aren't done often, sure. But configuring an ingress is quite common, so having a cleaner way to kustomize it would be extremely beneficial.

@iameli

iameli commented Jan 8, 2020

@davinkevin's referenced commit (davinkevin/Podcast-Server@9ca4be5) illustrates the problem very nicely: how do I make three different variants with three different ingress rules applying to three different hosts? Here's how I'm currently solving the problem; can y'all see how this is inelegant?

Here's my base:

broadcaster/broadcaster.yaml

[deployment and service omitted]

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: broadcaster
spec:
  rules:
    - host: $(SERVICE_NAME).example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: broadcaster
              servicePort: 80

broadcaster/kustomization.yaml

resources:
  - broadcaster.yaml

And here's what I'm using to produce ingresses for broadcaster-bulbasaur.example.com, broadcaster-charmander.example.com, and broadcaster-squirtle.example.com.

broadcaster-pokemon/kustomization.yaml

resources:
- ./bulbasaur
- ./charmander
- ./squirtle

broadcaster-pokemon/squirtle/kustomization.yaml

resources:
- ../../broadcaster
nameSuffix: -squirtle
patchesJson6902:
  - path: hostname.yaml
    target:
      group: extensions
      kind: Ingress
      name: broadcaster
      version: v1beta1

broadcaster-pokemon/squirtle/hostname.yaml

- op: replace
  path: /spec/rules/0/host
  value: broadcaster-squirtle.example.com

broadcaster-pokemon/charmander/kustomization.yaml

resources:
- ../../broadcaster
nameSuffix: -charmander
patchesJson6902:
  - path: hostname.yaml
    target:
      group: extensions
      kind: Ingress
      name: broadcaster
      version: v1beta1

broadcaster-pokemon/charmander/hostname.yaml

- op: replace
  path: /spec/rules/0/host
  value: broadcaster-charmander.example.com

broadcaster-pokemon/bulbasaur/kustomization.yaml

resources:
- ../../broadcaster
nameSuffix: -bulbasaur
patchesJson6902:
  - path: hostname.yaml
    target:
      group: extensions
      kind: Ingress
      name: broadcaster
      version: v1beta1

broadcaster-pokemon/bulbasaur/hostname.yaml

- op: replace
  path: /spec/rules/0/host
  value: broadcaster-bulbasaur.example.com

I'd like to do something like this instead:

broadcaster/kustomization.yaml

resources:
  - broadcaster.yaml
vars:
  - name: SERVICE_NAME
    objref:
      kind: Service
      name: broadcaster
      apiVersion: v1

broadcaster-pokemon/squirtle/kustomization.yaml

resources:
- ../../broadcaster
nameSuffix: -squirtle

broadcaster-pokemon/charmander/kustomization.yaml

resources:
- ../../broadcaster
nameSuffix: -charmander

broadcaster-pokemon/bulbasaur/kustomization.yaml

resources:
- ../../broadcaster
nameSuffix: -bulbasaur

Much cleaner.

@xhanin

xhanin commented Jun 13, 2020

Any chance to see that addressed in a future version?

When you have a bunch of subdomains in an ingress, using a JSON patch is not acceptable. It works, but referencing the hosts by index leads to odd errors if someone changes the order of the hosts in the original ingress. So having something like what we have for images would be so nice...

In my team we ended up piping kustomize build through sed to handle this more conveniently. But it's a shame that something this common is not supported out of the box.
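The build-then-sed pipeline mentioned here can be sketched as follows (a sketch; the domain names and the `render` stand-in are placeholders, not the team's actual setup):

```shell
#!/bin/sh
# Post-process the rendered manifests instead of index-based patching:
# rewrite every occurrence of the staging domain suffix after the build.
# `render` stands in for something like `kustomize build overlays/staging`.
render() {
  printf 'host: app.staging.example.com\n'
}

render | sed 's/\.staging\.example\.com/.example.com/'
# prints: host: app.example.com
```

The obvious downside, as noted, is that this lives outside kustomize entirely and silently rewrites anything matching the pattern.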

@andsens

andsens commented Jun 20, 2020

I ran into a different use-case for the same feature yesterday:
At work we are going to have a local k8s setup on each machine. With the old VM setup we customize the hostname to have a $USER suffix. That hostname is then broadcast via mDNS on the internal network so people other than the person at the machine can test solutions from their own machine (i.e. instead of solution.local you'd have solution-$USER.local).
I am working on an mDNS ingress hostname broadcaster (currently only works with microk8s), and being able to locally apply various manifests containing ingresses without having to post-process them would be very helpful.

@coderanger

I empathize with the Kustomize team, maybe this could be addressed in k/k with something like a spec.baseHostname field?

@esteban1983cl

Please add support for suffixes.
For instance, I have a lot of ingresses with the hostname suffix .aws-test.example.com. I need to add an overlay for different environments or zones to get ingresses with the hostname suffix .aws.example.com or .gcp.example.com.

@dwmkerr

dwmkerr commented Aug 20, 2020

Without features like this (being able to essentially string-interpolate on fields), I'm not really sure how Kustomize fits into the ecosystem. I was using it because Helm is way too complex for simple projects, and Kustomize covers 99% of my needs - except that I can't configure the hostnames of ingress routes.

I know there is a design goal not to make this a templating project, but without some kind of basic templating/interpolation, does this not greatly limit the potential use cases?

JSON Patch - clever, but grody for simple use cases.

@andsens

andsens commented Sep 4, 2020

OK, I rescind my +1 on this. Using patches, my specific use case is actually very easy to solve by templating the patch files and creating a templated kustomize overlay (using Ansible):
ingress-patch.json.j2

[
	{
		"op": "replace",
		"path": "/spec/tls/0/hosts/0",
		"value": "{{ ingress.name }}-{{ ansible_env.USER }}.local"
	},
	{
		"op": "replace",
		"path": "/spec/rules/0/host",
		"value": "{{ ingress.name }}-{{ ansible_env.USER }}.local"
	}
]

kustomize.yaml.j2

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base
patches:
{%- for ingress in operations.ingresses %}

- path: {{ ingress.name }}-ingress-patch.json
  target:
    group: networking.k8s.io
    version: v1beta1
    kind: Ingress
    name: {{ ingress.name }}
{%- endfor %}

I'm also beginning to think that implementing something like this in kustomize would erode some of the simplicity that I have grown really fond of.

@MichaelJCole

MichaelJCole commented Nov 7, 2020

Ugh, I'm on my first day of Kustomize and foiled by this fundamental challenge. I have different domains for each environment. This seems like a basic use case.

Options:

  1. Copy/paste the entire file and move on.
  2. patchesJson6902 using paths like /spec/rules/0/host, which seems very fragile
  3. kustomize vars but they seem to only reference other strings. Is it possible to use a constant? If not, why not?

If there is a "bug" here, is it that the elements of "rules" don't have names, so they can't be strategically merged, breaking a basic use case for reusing code with different domains?

Is there some other solution I'm missing?

# SEE: https://kubernetes.io/docs/concepts/services-networking/ingress/

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-xxx.com
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: clusterissuer-selfsigned
spec:
  tls:
  - hosts:
    - xxx.xxx.team
    - app.xxx.team
    - www.xxx.team
    - xxx.team
    secretName: tls-xxx
  defaultBackend:
    service:
      name: www
      port:
        number: 80
  rules:
  - host: xxx.xxx.team
    http:
      paths:
      - backend:
          service:
            name: echo1
            port:
              number: 80
  - host: app.xxx.team
    http:
      paths:
      - backend:
          service:
            name: echo2
            port:
              number: 80
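Spelling out option 2 for the manifest above, the patch might look like this (a sketch; staging.xxx.team is a placeholder, and the index-based paths must track the order of the tls and rules entries in the base, which is exactly the fragility noted above):

```yaml
# hostname-patch.yaml (sketch; values are placeholders)
- op: replace
  path: /spec/tls/0/hosts/0
  value: staging.xxx.team
- op: replace
  path: /spec/rules/0/host
  value: staging.xxx.team
```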

@andsens

andsens commented Nov 7, 2020

@MichaelJCole the patches are just as stable as the Ingress API itself, so that shouldn't be any trouble.
I actually ended up creating a kustomize transformer instead (transformers and generators are awesome btw.).
That way an additional patch overlay is not needed.
I'm sure it can be modified to fit your use case.

#!/usr/bin/env python3
"""IngressTransformer - Modify ingress domain names according to a template
Usage:
  IngressTransformer <config-path>
Template pattern:
  The template supports the following variables:
  {_TLD} last part of the domain name
  {_HOSTNAME} everything except the TLD
  {_FQDN} the entire domain name
  {...} any environment variable
"""
import docopt
import yaml
import os
import sys

def main():
  params = docopt.docopt(__doc__)
  config = yaml.load(open(params['<config-path>']), Loader=yaml.FullLoader)
  template = config['spec']['template']
  resources = yaml.load_all(sys.stdin, Loader=yaml.FullLoader)
  ingresses = []
  for resource in resources:
    if resource['apiVersion'] in ['networking.k8s.io/v1', 'networking.k8s.io/v1beta1'] \
      and resource['kind'] == 'Ingress':
      for entry in resource['spec']['tls']:
        for idx, domain in enumerate(entry['hosts']):
          entry['hosts'][idx] = transform_host(domain, template)
      for rule in resource['spec']['rules']:
        rule['host'] = transform_host(domain, template)
      ingresses.append(resource)
  sys.stdout.write(yaml.dump_all(ingresses))


def transform_host(domain, template):
  parts = domain.split('.')
  return template.format(**{
    **os.environ,
    '_TLD': parts[-1],
    '_HOSTNAME': '.'.join(parts[0:-1]),
    '_FQDN': '.'.join(parts),
  })

if __name__ == '__main__':
  main()

Place it in ~/.config/kustomize/plugin/APINAME/ingressdomaintransformer and put a config next to your ingress yaml:

---
apiVersion: APINAME
kind: IngressDomainTransformer
metadata:
  name: username-suffix
spec:
  template: '{_HOSTNAME}-{USER}.{_TLD}'

@Shell32-Natsu Shell32-Natsu added kind/feature Categorizes issue or PR as related to a new feature. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Nov 9, 2020
@Shell32-Natsu
Contributor

Shell32-Natsu commented Nov 9, 2020

As @andsens mentioned, the most flexible way to do any operation in kustomize is to write your own transformer. Meanwhile, kustomize now supports KRM functions as transformers. A KRM function is containerized, so it is easier to reuse. Although we lack documentation for this feature, you can find some examples in the test code.

@Shell32-Natsu Shell32-Natsu added triage/needs-information Indicates an issue needs more information in order to work on it. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Nov 9, 2020
@marcelser

FYI @andsens, the above example of a python3 transformer is badly broken. I tried to adapt it to my needs. First, the docopt is not valid (it always errors out with the latest docopt, 0.6.2); in the end I had to remove the "Template Pattern:" block completely. Then it doesn't check whether the 'tls' or 'rules' keys actually exist, and errors out if one of them is missing. And in the case of 'rules', you're passing the undefined value of domain, which is only populated in the for loop for 'tls', not for 'rules'; I had to exchange 'domain' with rule['host'].

And lastly, you forgot to mention that you have to add:

transformers:
- <your config file>.yaml

to the kustomization.yaml to make it work.

You might also want to add the following before calling main(), or as the first thing in main(). I was running this in a CentOS 7 Docker image and had PyYAML issues with invalid characters because the input stream was not UTF-8. Note that this requires Python 3.7 or higher:

sys.stdin.reconfigure(encoding='utf-8')
sys.stdout.reconfigure(encoding='utf-8')
sys.stderr.reconfigure(encoding='utf-8')
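Folding those fixes together, the transform step might look like this (a sketch; it assumes resources parsed into dicts, as in the original script, and the function names here are illustrative):

```python
def transform_host(domain, template, env=None):
    # Same idea as the original helper: split the FQDN and fill the template.
    parts = domain.split('.')
    values = dict(env or {})
    values.update({
        '_TLD': parts[-1],
        '_HOSTNAME': '.'.join(parts[:-1]),
        '_FQDN': '.'.join(parts),
    })
    return template.format(**values)

def transform_ingress(resource, template, env=None):
    # Guard against missing 'tls'/'rules' keys instead of erroring out.
    for entry in resource['spec'].get('tls', []):
        entry['hosts'] = [transform_host(d, template, env) for d in entry['hosts']]
    for rule in resource['spec'].get('rules', []):
        # Use the rule's own host, not a leftover loop variable.
        rule['host'] = transform_host(rule['host'], template, env)
    return resource

ingress = {
    'apiVersion': 'networking.k8s.io/v1',
    'kind': 'Ingress',
    'spec': {'rules': [{'host': 'solution.local'}]},  # no 'tls' key on purpose
}
out = transform_ingress(ingress, '{_HOSTNAME}-{USER}.{_TLD}', {'USER': 'alice'})
print(out['spec']['rules'][0]['host'])  # solution-alice.local
```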

@andsens

andsens commented Jan 6, 2021

@marcelser thank you for your notes. Indeed, the for loop is badly broken. The docopt works for me, but I can see that I have an additional newline before "Template pattern:", and testing it on try.docopt.org confirms that the newline is needed.

Thanks for the tip about utf-8, that'll definitely come in handy.

@atmosx

atmosx commented Feb 20, 2021

I'd like to be able to dynamically add/remove/edit hosts during deployments. I need to deploy one namespace and ingress per branch, all other things being equal. IMO, adding ingresses dynamically should be as easy as adding labels, e.g.:

kustomize edit set/add/remove ingress test.xmpl.com

My use case is simple so I can reliably use sed and/or envsubst, but I find it strange that the ability to modify an ingress is missing.

@ant31

ant31 commented Mar 27, 2021

If we are pushed to use workarounds/templating, whether via sed or external transformers, it's most likely something that should be added natively to kustomize.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot
Contributor

@cprivitere: Reopened this issue.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot reopened this Jan 10, 2023
@k8s-ci-robot
Contributor

@lswith: This issue is currently awaiting triage.

SIG CLI takes a lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the triage/accepted label.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Jan 10, 2023
@natasha41575 natasha41575 removed the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Jan 18, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 18, 2023
@cprivitere
Member

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 19, 2023
@k8s-triage-robot

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 18, 2023
@kragniz

kragniz commented Jul 18, 2023

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 18, 2023
@SGStino

SGStino commented Aug 21, 2023

I guess this should be something similar to images, which are, in some way, also URIs: you define your ingress with a placeholder host such as service1.myproject.local,
and then have a kustomization with:

Prod:

hosts:
  - name: service1.myproject.local
    newName: myservice1.company.com
  - name: service2.myproject.local
    newName: myservice2.company.com

Test:

hosts:
  - name: service1.myproject.local
    newName: myservice1.test.local
  - name: service2.myproject.local
    newName: myservice2.test.local

this should update all host references, not just ingress routes but also Gateway APIs, and preferably also the TLS hosts.

this is certainly more stable than having a JSON patch replace the xth, yth, and zth elements of an array.

And if there really needs to be an update to something else, like a crd, the images transformer implementation already has a mechanism for that with configurations (https://github.com/kubernetes-sigs/kustomize/blob/master/examples/transformerconfigs/images/kustomization.yaml)

It could even be that #3492/#3737 is enough, but that its adoption is hindered by the lack of documentation?

This is where I'm getting stuck trying to use the replacements transformer for ingress hosts:

replacements:
  - source: 
      kind: Ingress
      fieldPaths:
      - spec.rules.*.host
      - spec.tls.*.hosts.*
      # what goes here to specify which host I want to replace?
    target:
      # what goes here to set a specific value?

If a ReplacementTransformer won't do as-is, shouldn't we propose a modification to it instead of going forward with an entirely new transformer?
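For reference, in the released replacements syntax the source names a single field on a source object, and targets selects which objects and fieldPaths receive that value, so the per-environment host can be driven from one place (a sketch; the ConfigMap and its name are illustrative):

```yaml
# overlay kustomization.yaml fragment (sketch; names are illustrative)
replacements:
- source:
    kind: ConfigMap
    name: ingress-values      # holds the environment-specific host
    fieldPath: data.host
  targets:
  - select:
      kind: Ingress
    fieldPaths:
    - spec.rules.0.host
    - spec.tls.0.hosts.0
```

Note this still addresses list entries by index; it answers "where does the value come from," not "which host in the list should change."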

@k8s-triage-robot

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 26, 2024
@kragniz

kragniz commented Jan 26, 2024

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 26, 2024
@xhanin

xhanin commented Feb 21, 2024

Kyverno seems to provide a solution for that:
https://kyverno.io/policies/other/replace-ingress-hosts/replace-ingress-hosts/

That may be a good workaround for the lack of support in kustomize.

@k8s-triage-robot

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 21, 2024
@kragniz

kragniz commented Jun 4, 2024

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 4, 2024
@pfyod

pfyod commented Jun 4, 2024

What about just adding a new directive that allows one to override arbitrary merge keys, something like:

Base:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
spec:
  rules:
  - host: placeholder

Patch:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
spec:
  rules:
  - $mergeKey: placeholder
    host: example.com

@SGStino

SGStino commented Jun 8, 2024

I believed it should be possible with replacements, but I never got around to figuring out exactly how they work. Maybe it wasn't really designed for this and is still lacking a simple tweak?

I wrote this from what I could find in the ReplacementTransformer issues, but couldn't find how to tell it what to replace:

replacements:
  - source: 
      kind: Ingress
      fieldPaths:
      - spec.rules.*.host
      - spec.tls.*.hosts.*
      # what goes here to specify which host I want to replace?
    target:
      # what goes here to set a specific value?

@k8s-triage-robot

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 6, 2024
@kragniz

kragniz commented Sep 6, 2024

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 6, 2024