
AWS alb + Ingress + k8s 1.22 #8067

Open
rdeteix opened this issue Dec 30, 2021 · 12 comments
Labels
bug Something isn't working

Comments


rdeteix commented Dec 30, 2021

Describe the bug

I tried to follow the "AWS Application Load Balancers (ALBs) And Classic ELB (HTTP Mode)" tutorial, but it is not compatible with k8s 1.22. I've changed the configuration and it is still not working.

To Reproduce
I used the latest stable argo-cd.
Service:

apiVersion: v1
kind: Service
metadata:
  annotations:
    alb.ingress.kubernetes.io/backend-protocol-version: HTTP2 #This tells AWS to send traffic from the ALB using HTTP2. Can use GRPC as well if you want to leverage GRPC specific features
  labels:
    app: argogrpc
  name: argogrpc
  namespace: argocd
spec:
  ports:
  - name: "443"
    port: 443
    protocol: TCP
    targetPort: 8080
  selector:
    app.kubernetes.io/name: argocd-server
  sessionAffinity: None
  type: NodePort

Ingress:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/backend-protocol: HTTPS
    # Use this annotation (which must match a service name) to route traffic to HTTP2 backends.
    alb.ingress.kubernetes.io/conditions.argogrpc: |
      [{"field":"http-header","httpHeaderConfig":{"httpHeaderName": "Content-Type", "values":["application/grpc"]}}]
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
  name: argocd
  namespace: argocd
spec:
  rules:
  - host: argocd.argoproj.io
    http:
      paths:
      - backend:
          service:
            name: argogrpc
            port:
              number: 443
        pathType: ImplementationSpecific
      - backend:
          service:
            name: argocd-server
            port:
              number: 443
        pathType: ImplementationSpecific
  tls:
  - hosts:
    - argocd.argoproj.io

Expected behavior
A load balancer should be created for the Ingress, but I don't see any load balancer created.

@rdeteix rdeteix added the bug Something isn't working label Dec 30, 2021
@sushilalert

Same issue with k8s 1.21 and the AWS ALB ingress:

error: error validating "argocdingresstrafic.yml": error validating data: [ValidationError(Ingress.spec.rules[0].http.paths[0].backend): unknown field "serviceName" in io.k8s.api.networking.v1.IngressBackend, ValidationError(Ingress.spec.rules[0].http.paths[1].backend): unknown field "serviceName" in io.k8s.api.networking.v1.IngressBackend]; if you choose to ignore these errors, turn validation off with --validate=false
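
For reference, that error means the manifest still uses the old extensions/v1beta1-style backend fields; in networking.k8s.io/v1 the backend service is nested one level deeper. A minimal before/after sketch (service name and port are placeholders):

# old (v1beta1) style, rejected by networking.k8s.io/v1:
# backend:
#   serviceName: argocd-server
#   servicePort: 443

# networking.k8s.io/v1 style:
backend:
  service:
    name: argocd-server
    port:
      number: 443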

@albertfrates

This issue can be fixed by reformatting your Ingress spec to:

spec:
  rules:
    - host: argocd.argoproj.io
      http:
        paths:
          - pathType: ImplementationSpecific
            backend:
              service:
                name: argogrpc
                port:
                  number: 80
          - pathType: ImplementationSpecific
            backend:
              service:
                name: argocd-server
                port:
                  number: 80

@OliverLeighC

I'm having this same issue, even after updating the spec to match the new format:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/backend-protocol: HTTPS
    alb.ingress.kubernetes.io/conditions.argogrpc: |
      [{"field":"http-header","httpHeaderConfig":{"httpHeaderName": "Content-Type", "values":["application/grpc"]}}]
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
  name: argocd
  namespace: argocd
spec:
  rules:
    - host: argocd.argoproj.io
      http:
        paths:
          - path: /*
            backend:
              service:
                name: argogrpc
                port:
                  number: 443
            pathType: ImplementationSpecific
          - path: /*
            backend:
              service:
                name: argocd-server
                port:
                  number: 443
            pathType: ImplementationSpecific
  tls:
    - hosts:
        - argocd.argoproj.io

The Ingress is created successfully, but no ALB load balancer is created and the Ingress has no address (it should be the address of the ALB load balancer).

Here is my service:

apiVersion: v1
kind: Service
metadata:
  annotations:
    alb.ingress.kubernetes.io/backend-protocol-version: HTTP2
  labels:
    app: argogrpc
  name: argogrpc
  namespace: argocd
spec:
  ports:
    - name: "443"
      port: 443
      protocol: TCP
      targetPort: 8080
  selector:
    app.kubernetes.io/name: argocd-server
  sessionAffinity: None
  type: NodePort


lifeofmoo commented Mar 25, 2022

Hello, I am also seeing issues when I follow the docs.

I can only get my Ingress to fully deploy AND serve traffic (i.e. successfully register the argocd-server pod to the two target groups) when I set (sketched below):

target-type: ip (Ingress annotation)
type: ClusterIP (argogrpc Service)
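
A minimal sketch of just those two settings (everything else stays as in the manifests above):

# Ingress annotation - register pod IPs directly in the target groups
metadata:
  annotations:
    alb.ingress.kubernetes.io/target-type: ip

# argogrpc Service
spec:
  type: ClusterIP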

What aws-load-balancer-controller version are you running? I had to rewrite ALL my ingress files very recently when I moved from 2.1.3 to 2.4.0. The big gotcha for me is that the kubernetes.io/ingress.class annotation is now deprecated in favour of spec.ingressClassName:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: alb  # deprecated - appreciate this isn't in the argo docs so shouldn't be used anyway :)
  name: argocd
  namespace: argocd
spec:
  ingressClassName: alb  # THIS needs to exist - unless you do EXTRA config to make alb the default class, which is out of scope for this discussion
  rules:

Full thread details are on this slack thread!

@OliverLeighC

In case anyone else is coming here with the same problem, the solution for me was to patch the argocd-server service to use type: NodePort instead of the default ClusterIP and to make sure my alb-controller was on the latest version.

updated files for reference:

overlays/argocd-server.yaml

apiVersion: v1
kind: Service
metadata:
  name: argocd-server
  labels:
    app.kubernetes.io/part-of: argocd
    app.kubernetes.io/name: argocd-server
    app.kubernetes.io/component: server
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 8080
    - name: https
      port: 443
      protocol: TCP
      targetPort: 8080
  selector:
    app.kubernetes.io/name: argocd-server

base/argogrpc.yaml

apiVersion: v1
kind: Service
metadata:
  annotations:
    alb.ingress.kubernetes.io/backend-protocol-version: HTTP2
  labels:
    app: argogrpc
    app.kubernetes.io/part-of: argocd
    app.kubernetes.io/name: argogrpc
    app.kubernetes.io/component: server
  name: argogrpc
  namespace: argocd
spec:
  ports:
    - name: https
      port: 443
      protocol: TCP
      targetPort: 8080
  selector:
    app.kubernetes.io/name: argocd-server
  type: NodePort

ingress-class.yaml

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: alb-default
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: ingress.k8s.aws/alb

ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/backend-protocol: HTTPS
    alb.ingress.kubernetes.io/load-balancer-attributes: routing.http2.enabled=true
    alb.ingress.kubernetes.io/certificate-arn: xxxxx.xxxx
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: instance
    alb.ingress.kubernetes.io/tags: app=argo
    alb.ingress.kubernetes.io/conditions.argogrpc: |
      [{"field":"http-header","httpHeaderConfig":{"httpHeaderName": "Content-Type", "values":["application/grpc"]}}]
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
  name: argocd
  namespace: argocd
spec:
  ingressClassName: alb-default
  rules:
    - host: argocd.xxx.xxx
      http:
        paths:
          - path: /*
            backend:
              service:
                name: argogrpc
                port:
                  number: 443
            pathType: ImplementationSpecific
          - path: /*
            backend:
              service:
                name: argocd-server
                port:
                  number: 443
            pathType: ImplementationSpecific
  tls:
    - hosts:
        - argocd.xxx.xxx

kustomize.yaml

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: argocd

bases:
  - base/argocd.yaml
  - base/argogrpc.yaml
  - https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

resources:
  - ingress-class.yaml
  - ingress.yaml

patchesStrategicMerge:
  - overlays/argocd-server.yaml

base/argocd.yaml is just the basic application

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: argocd
spec:
  project: default
  source:
    path: argocd
    repoURL: https://github.com/xxxx/xxx.git
    targetRevision: main
  destination:
    namespace: argocd
    server: https://kubernetes.default.svc
  syncPolicy:
    automated:
      selfHeal: true
      prune: true

@airmonitor

Hey Guys,

I'm wondering if any of you were able to use a custom security group with the ALB?

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
    alb.ingress.kubernetes.io/group: shared-alb
    alb.ingress.kubernetes.io/group.name: shared-alb
    alb.ingress.kubernetes.io/certificate-arn: arn-cert
    alb.ingress.kubernetes.io/target-type: "ip"
    alb.ingress.kubernetes.io/backend-protocol: HTTPS
    # alb.ingress.kubernetes.io/security-groups: sg-xxxxxxxxx 
    alb.ingress.kubernetes.io/conditions.argogrpc: |
      [{"field":"http-header","httpHeaderConfig":{"httpHeaderName": "Content-Type", "values":["application/spec:
  rules:
    - host: domainname
      http:
        paths:
          - backend:
              service:
                name: argogrpc
                port:
                  number: 443
            pathType: ImplementationSpecific
          - backend:
              service:
                name: argocd-server
                port:
                  number: 443
            pathType: ImplementationSpecific
  tls:
    - hosts:
        - domainname

The security group allows TCP on 443, but after adding the security group to the ALB I'm getting a 504 status code.

Sorry if my question doesn't fit this topic.
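
A likely cause, for anyone landing here: with a custom alb.ingress.kubernetes.io/security-groups the controller stops opening the node/pod security groups towards the ALB, so health checks and traffic can time out with a 504. One possible fix is to let the controller keep managing those backend rules (this is an assumption on my part - the annotation below needs aws-load-balancer-controller v2.3+, so check the annotation docs for your version):

metadata:
  annotations:
    alb.ingress.kubernetes.io/security-groups: sg-xxxxxxxxx
    # ask the controller to also open the backend (node/pod) security
    # groups towards this ALB, instead of adding that rule by hand
    alb.ingress.kubernetes.io/manage-backend-security-group-rules: "true"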

@bmohamoodallybmj

Hi,

I've never used security-groups. I've always just allowed the controller to dynamically create (and destroy) the SGs using the https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/guide/ingress/annotations/#inbound-cidrs annotation.

I don't use this for my Argo Ingress, but I do use the inbound-cidrs annotation for 99% of my other Ingress services.
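
For reference, that annotation takes a comma-separated list of CIDRs on the Ingress; a minimal sketch (the CIDRs are placeholders):

metadata:
  annotations:
    # only these source ranges may reach the ALB listeners; the controller
    # creates (and destroys) the matching security group rules
    alb.ingress.kubernetes.io/inbound-cidrs: 10.0.0.0/8,203.0.113.0/24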


vumdao commented Jun 9, 2022

I used AWS Application Load Balancers; the argogrpc target group is unhealthy and the UI returns a 502.


arbianqx commented Jun 9, 2022

In case anyone else is coming here with the same problem, the solution for me was to patch the argocd-server to use type: NodePort instead of the default ClusterIP and to make sure my alb-controller was on the latest version … (quoting @OliverLeighC's full comment and manifests above)
I was encountering this issue, and this helped.


vumdao commented Jun 9, 2022

@arbianqx My argogrpc target group is unhealthy. What is the health check path of your argogrpc target group?

@lmansilla26

Hey guys, for everyone with this problem: the issue comes from the default argocd-server service, which is a ClusterIP, while the AWS Load Balancer Controller needs a NodePort service to register instance targets for the ALB. Patching or deploying the service as NodePort will make it work.
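
If you install with kustomize as shown earlier in the thread, a minimal strategic-merge patch for that could look like this (a sketch; the file name is made up, add it under patchesStrategicMerge):

# overlays/argocd-server-nodeport.yaml
apiVersion: v1
kind: Service
metadata:
  name: argocd-server
spec:
  type: NodePort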

@SeungsuKim

@vumdao Set the alb.ingress.kubernetes.io/backend-protocol-version annotation value to GRPC instead of HTTP2. It will change your target group's protocol version to gRPC.
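
On the argogrpc Service from the manifests above, that looks like this (a sketch; the rest of the Service is unchanged):

apiVersion: v1
kind: Service
metadata:
  annotations:
    # GRPC instead of HTTP2 makes the controller create the target group
    # with protocol version gRPC
    alb.ingress.kubernetes.io/backend-protocol-version: GRPC
  name: argogrpc
  namespace: argocd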
