
BASE_HREF is not working #3080

Closed
yuvraj9 opened this issue May 22, 2020 · 36 comments · Fixed by #3592
Labels
area/docs, area/server, type/support

Comments


yuvraj9 commented May 22, 2020

Can't access the Argo UI if I change BASE_HREF to something other than /.

I want it to be accessible at xxx.com/argo.

To reproduce, change the env variable BASE_HREF to something other than /. I have tried argo, /argo, and /argo/; none of them work.

Getting this in my browser's console:

 Uncaught SyntaxError: Unexpected token '<'

This is the image we are using: argoproj/argocli:v2.7.4

  • Kubernetes version: 1.16
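
For reference, a minimal sketch of how the env var might be set on the argo-server Deployment (container name and surrounding fields are assumed here, not taken from the reporter's cluster):

# Hypothetical fragment of deployment/argo-server; only the env entry matters
spec:
  template:
    spec:
      containers:
        - name: argo-server
          args: [server]
          env:
            - name: BASE_HREF
              value: /argo/   # also tried "argo" and "/argo"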

Message from the maintainers:

If you are impacted by this bug please add a 👍 reaction to this issue! We often sort issues this way to know what to prioritize.



ramanNarasimhan77 commented Jun 23, 2020

Hello,
I am also facing a similar issue

BASE_HREF setup

- name: BASE_HREF
  value: /argo/

Ingress setup

rules:
  - host: {{domain-name}}
    http:
      paths:
      - backend:
          serviceName: argo-server
          servicePort: http
        path: /argo(/|$)(.*)

I can see workflows in https://{{domain-name}}/argo/workflows

But when I click on a workflow, I see a blank screen

[screenshot: blank workflow details page]

Under network I can see this

[screenshot: browser network tab]


alexec commented Jun 23, 2020

I can see workflows in https://{{domain-name}}/argo/workflows

This screen can be blank if the workflow is invalid. Check under the summary tab (one of the small buttons, top right).


ramanNarasimhan77 commented Jun 23, 2020

This screen can be blank if the workflow is invalid. Check under the summary tab (one of the small buttons, top right).

@alexec
When I submit a workflow from the UI, I can see the steps.
When I go back to the main workflows/ page and then click on the workflow, I get a blank screen.

On clicking resubmit from the blank screen, I can see all the steps
[screenshot: workflow steps visible after resubmit]

Going back to the main page:
[screenshot: main workflows page]

Now click on the workflow again. You can see a blank page:
[screenshot: blank workflow page]

Also, from the CLI I can see the workflow is successful:

argo get artifact-passing-x8zhr
Name:                artifact-passing-x8zhr
Namespace:           argo
ServiceAccount:      argo-workflow-service-account
Status:              Succeeded
Created:             Tue Jun 23 21:45:24 +0530 (8 minutes ago)
Started:             Tue Jun 23 21:45:24 +0530 (8 minutes ago)
Finished:            Tue Jun 23 21:45:35 +0530 (8 minutes ago)
Duration:            11 seconds

STEP                       TEMPLATE          PODNAME                            DURATION  MESSAGE
 ✔ artifact-passing-x8zhr  artifact-example
 ├---✔ generate-artifact   whalesay          artifact-passing-x8zhr-982905896   4s
 └---✔ consume-artifact    print-message     artifact-passing-x8zhr-2574371560  5s


yuvraj9 commented Jun 26, 2020

@alexec Ideally it should load the UI at /argo; we shouldn't have to add extra routes.

@alexec alexec self-assigned this Jul 10, 2020

alexec commented Jul 10, 2020

You can get ingress working as follows:

  1. Update service/argo-server spec with type: LoadBalancer
  2. Add BASE_HREF as an environment variable to deployment/argo-server.
  3. Create an ingress with the annotation ingress.kubernetes.io/rewrite-target: /.
# diff --git a/manifests/base/argo-server/argo-server-deployment.yaml b/manifests/base/argo-server/argo-server-deployment.yaml
index dbafbfd8..3ad77285 100644
--- a/manifests/base/argo-server/argo-server-deployment.yaml
+++ b/manifests/base/argo-server/argo-server-deployment.yaml
@@ -16,6 +16,9 @@ spec:
         - name: argo-server
           image: argoproj/argocli:latest
           args: [server]
+          env:
+            - name: BASE_HREF
+              value: /argo/
           ports:
             - name: web
               containerPort: 2746
diff --git a/manifests/base/argo-server/argo-server-ingress.yaml b/manifests/base/argo-server/argo-server-ingress.yaml
index e69de29b..f4599000 100644
--- a/manifests/base/argo-server/argo-server-ingress.yaml
+++ b/manifests/base/argo-server/argo-server-ingress.yaml
@@ -0,0 +1,16 @@
+apiVersion: extensions/v1beta1
+kind: Ingress
+metadata:
+  name: argo-server
+  annotations:
+    ingress.kubernetes.io/rewrite-target: /
+spec:
+  rules:
+    - http:
+        paths:
+          - backend:
+              serviceName: argo-server
+              servicePort: 2746
+            path: /argo
+
diff --git a/manifests/base/argo-server/argo-server-service.yaml b/manifests/base/argo-server/argo-server-service.yaml
index 0c6e58d3..5bdc67ac 100644
--- a/manifests/base/argo-server/argo-server-service.yaml
+++ b/manifests/base/argo-server/argo-server-service.yaml
@@ -9,3 +9,4 @@ spec:
     - name: web
       port: 2746
       targetPort: 2746
+  type: LoadBalancer

While needing this is not uncommon for ingresses, it's also not straightforward.

@alexec alexec added the solution/workaround label Jul 10, 2020

alexec commented Jul 10, 2020

argoproj/argo-cd#3475


alexec commented Jul 13, 2020

@kevinsimons-wf, @ramanNarasimhan77, @ematpad, @gordonbondon, @damianoneill, @yuvraj9, and @yk634 - could I please ask you to review the proposed solution and state whether it is adequate or a better solution is needed?

@ramanNarasimhan77

@alexec I will test this today and come back with my findings.


gordonbondon commented Jul 14, 2020

@alexec I got it working with this setup

  1. BASE_HREF set to the subpath with a trailing slash (the trailing slash is important because of how the <base href> tag works; I guess we can just update the docs here https://github.com/argoproj/argo/blob/676868f31da1bce361e89bebfa1eea81471784ac/docs/argo-server.md#base-href from /argo to /argo/ - that's what got me confused at first), or maybe just add a check to always append a slash if it's not present.
  2. No updates to argo Service, just put it behind Ingress with a similar config:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    external-dns.alpha.kubernetes.io/alias: "true"
    external-dns.alpha.kubernetes.io/target: target.example.com
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: HTTP
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: server.example.com
    http:
      paths:
      - backend:
          serviceName: argo-server
          servicePort: 2746
        path: /argo/(.*)
      - backend:
          serviceName: argo-server
          servicePort: 2746
        path: /argo
  tls:
  - hosts:
    - server.example.com
    secretName: some-cert-manager-secret

@ramanNarasimhan77

@alexec In my case the ingress does not seem to be the issue, as the main workflow page renders properly.
However, I see an empty screen when I click on any specific workflow, and it takes some time to load.

Our cluster has this setup
load balancer -> cluster wide ingress controller -> gatekeeper -> application's ingress controller -> argo

When I deploy argo separately using the manifests I don't see this problem.


alexec commented Jul 15, 2020

@ramanNarasimhan77 do you have a very large workflow? 1000+ nodes?


ramanNarasimhan77 commented Jul 15, 2020

@ramanNarasimhan77 do you have a very large workflow? 1000+ nodes?

@alexec No, my workflow only has 3 steps. I am not sure what could cause this behavior. But if others are able to use Argo with BASE_HREF set, then I think this ticket can be closed. The issue on my cluster is probably caused by one of the ingress controllers. I will continue my investigation.


omerfsen commented Oct 16, 2020

Documentation was updated with #4306; there is no need to change the argo-server svc - just the Ingress plus the Deployment change with BASE_HREF is enough.


kanwalnainsingh commented Feb 25, 2021

@alexec I got it working with this setup
[...]

Your config saved the day. Thanks a lot. I just had to turn off the SSL redirect.

@bin-chen-techlabs

Does anybody have a similar one for the AWS Load Balancer Controller instead of the nginx controller?



armenr commented Dec 29, 2021

For anyone who finds their way to this thread - or runs into this problem - here's a working implementation for a local K3D cluster. It's all laid out flexibly enough that it should be portable, easy to understand, and easy to modify or adapt to your own workflow/implementation.

Overview

K3D is a wrapper and set of conveniences for running a fully-dockerized local cluster on Rancher K3s.

- LoadBalancer: Klipper (built-in default LB for K3D/K3s)
- Ingress: Traefik
- ArgoCD Install Method: Bitnami Helm Chart (I like it, I know it's not perfect...or from "the source," but it suits our workflows and needs, so that's what I chose).

Notes

In this configuration, there are a few things to be aware of.

  • ArgoCD is configured to listen and serve on http://kubernetes.docker.internal/argocd/ (be sure to add this to your hosts file on the system where you deploy!)
  • SSL/TLS are disabled
  • The ingress targets Traefik's web service, not web-secure (because HTTP)
  • The included Helm example also includes a fully-working setup for a configurationManagementPlugin sidecar which you can use for building your charts & doing "last-mile kustomization" in case you don't want ArgoCD just treating/building your apps as vanilla Kustomize apps

AND ONE MORE THING

YOU CAN have ArgoCD just treat the included Kustomize example as an Argo App of type Kustomize. In order to do so - and not NEED a configurationManagementPlugin sidecar - please see this configMapGenerator...these are required configMap entries for ArgoCD to be able to properly/correctly build Kustomizations that include Helm charts in just one step (through kustomize build --enable-helm .)

See here for documentation:

See here for the configMap: https://github.com/armenr/5thK8s/blob/main/dependencies/argo-cd/generators/configmap-argocd-cm.yaml
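
For orientation, a hedged sketch of the kind of argocd-cm entry that refers to (key name per Argo CD's documented kustomize.buildOptions setting; verify against the linked configMap):

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd   # assumed: the namespace Argo CD is installed into
  labels:
    app.kubernetes.io/part-of: argocd
data:
  # lets the bundled Kustomize render Helm charts referenced via helmCharts in one step
  kustomize.buildOptions: --enable-helm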

The goodies

Example Repo: https://github.com/armenr/5thK8s

K3D config: https://github.com/armenr/5thK8s/blob/main/assets/k3d_local.yaml

Traefik Ingress Example: https://github.com/armenr/5thK8s/blob/main/dependencies/argo-cd/patches/traefik-middleware.yaml ---> BIG THANK YOU TO @erkerb4

Helm Values for ArgoCD: https://github.com/armenr/5thK8s/blob/main/dependencies/argo-cd/values.yaml

How to Build/Deploy

# go to the directory
cd dependencies/argo-cd

# build via kustomize + built-in helm generator
kustomize build --enable-helm . | kubectl apply -n argocd -f -


lsgrep commented Mar 30, 2022

Not sure why the base tag is included in the first place; it breaks so many things. It makes deployments that usually sit behind some auth service, etc., very difficult. Not everyone can use an ingress at will.


sid8489 commented Aug 3, 2022

This screen can be blank if the workflow is invalid. Check under the summary tab (one of the small buttons, top right).

@alexec When I submit a workflow from the UI, I can see the steps. When I go back to the main workflows/ page and then click on the workflow, I get a blank screen.
[...]

@ramanNarasimhan77 We are facing a similar issue with the Kong ingress. Can you share how you resolved this issue?

@ramanNarasimhan77

@sid8489 I do not have a resolution for this issue. We still have this issue in our argo installation as it is behind a bunch of proxies

@RossComputerGuy

I'm having the same issue, using this file for my configuration.


riazarbi commented Sep 2, 2022

After a day of trying various permutations, here is a config that works for me. I am using k3s, version v1.24.4+k3s1, with the default traefik deployment.

Reproducible example -

  1. Create the following files:

Argo patch file:

# riaz@server:~/bin/k8s$ cat deployments/argo-patch.yaml 
spec:
  template:
    spec:
      containers:
      - name: argo-server
        args:
          - server
          - "--auth-mode=server"  # not relevant to reprex
          - "--secure=false"           # not relevant to reprex
        env:
          - name: BASE_HREF
            value: /argo/
        readinessProbe:
          httpGet:
            scheme: HTTP

Traefik ingress:

# riaz@server:~/bin/k8s$ cat deployments/argo-dashboard-ingress.yaml 
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: argo-ingressroute
  namespace: argo
spec:
  entryPoints:
    - web
  routes:
    - match: PathPrefix(`/argo`)
      kind: Rule
      services:
        - name: argo-server
          port: 2746
      middlewares:
        - name: argo-stripprefix
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: argo-stripprefix
  namespace: argo
spec:
  stripPrefix:
    prefixes:
      - /argo
    forceSlash: true

  2. Execute the following commands:
kubectl create namespace argo
kubectl apply -n argo -f https://github.com/argoproj/argo-workflows/releases/download/v3.3.9/install.yaml
kubectl patch deployment argo-server --namespace argo --patch-file deployments/argo-patch.yaml
kubectl apply -f deployments/argo-dashboard-ingress.yaml

  3. Wait for pods to get into a running state

  4. Visit http://[IP_ADDRESS]/argo/

Hope this helps someone with a similar issue.


arnoldrw commented Oct 13, 2022

Rough Istio equivalent of the traefik manifests above in case someone needs it. Istio's rewrite here is doing the job of traefik's stripPrefix.

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: argo-gw
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
    - hosts:
        - yoursite.com
      port:
        name: https
        number: 443
        protocol: https
      tls:
        credentialName: argo-tls
        mode: SIMPLE
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: argo-https-vs
  namespace: argo
spec:
  gateways:
    - istio-system/argo-gw
  hosts:
    - yoursite.com
  http:
  - match:
    - uri:
        prefix: /argo/
    rewrite:
      uri: /
    route:
    - destination:
        host: argo-server
        port:
          number: 2746


xavidop commented Sep 7, 2023

An updated version that works with k8s 1.27 (at least where I tested it):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argo-server
  namespace: argo
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    ingress.kubernetes.io/protocol: https # Traefik
    nginx.ingress.kubernetes.io/backend-protocol: https # ingress-nginx
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - backend:
              service:
                name: argo-server
                port:
                  number: 2746
            path: /argo(/|$)(.*)
            pathType: ImplementationSpecific

@agilgur5 agilgur5 added solution/suggested, area/server, area/docs, type/support and removed type/feature, solution/workaround, solution/suggested labels Sep 8, 2023

terbepetra commented Feb 23, 2024

I'm using EKS + ALB (with AWS LB Controller v2.7.0).
EKS 1.27
Argo-server v3.5.4

I needed to set the BASE_HREF env var to / in the deployment.
If you are using Helm to deploy argo-workflows, set the following in the values file:

values.yaml

server:
  baseHref: /

ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/certificate-arn: #cert-arn
    alb.ingress.kubernetes.io/healthcheck-path: /
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/ssl-redirect: "443"
    alb.ingress.kubernetes.io/target-type: ip
    kubernetes.io/ingress.class: alb
  name: argo-server
  namespace: workflows # change it to your namespace, where argo-workflows is running
spec:
  ingressClassName: alb
  rules:
  - host: your-argo-workflow-url.com
    http:
      paths:
      - backend:
          service:
            name: argo-server #your argo-server service name created for argo-workflows `kubectl get svc -n workflows`
            port:
              number: 2746
        path: /*
        pathType: ImplementationSpecific

@agilgur5 agilgur5 changed the title BASE_HREF is not working Feb 25, 2024

dmildh-absci commented Apr 18, 2024

Just to add to @terbepetra's comment above: if you are using argo-workflows Helm chart 0.41.1 and EKS 1.28, you can just do something like this for the AWS Load Balancer Controller:

server:
  ingress:
    enabled: true
    hosts: 
      - argo-workflows.yourdomain.com
    annotations:
      kubernetes.io/ingress.class: alb
      external-dns.alpha.kubernetes.io/hostname: argo-workflows.yourdomain.com # if you are using external-dns
      alb.ingress.kubernetes.io/healthcheck-path: /
      alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
      alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
      alb.ingress.kubernetes.io/ssl-redirect: "443"
      alb.ingress.kubernetes.io/scheme: internal
      alb.ingress.kubernetes.io/target-type: ip
      alb.ingress.kubernetes.io/certificate-arn: YOURCERTARN


tbernacchi commented Jul 27, 2024

I'm facing the same issue here; these are the steps I took:

kubectl apply -n argo -f https://github.com/argoproj/argo-workflows/releases/download/v3.5.8/install.yaml
kubectl set env deployment/argo-server -n argo BASE_HREF=/argo/

My ingress:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/issuer: self-signed
    nginx.ingress.kubernetes.io/backend-protocol: HTTP
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
  name: argo-server-workflow
  namespace: argo
spec:
  ingressClassName: nginx
  rules:
  - host: mykubernetes.com
    http:
      paths:
      - backend:
          service:
            name: argo-server
            port:
              number: 2746
        path: /argo/(/|$)(.*)
        pathType: ImplementationSpecific
  tls:
  - hosts:
    - mykubernetes.com
    secretName: my-kubernetes-cert

But when I try to reach https://mykubernetes.com/argo/ it gives me 502 Bad Gateway.

~|⇒ curl -k https://mykubernetes.com/argo/
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx</center>
</body>
</html>

I'm not sure what I'm doing wrong. Any help will be appreciated! Thank you


yuvraj9 commented Jul 29, 2024

@tbernacchi try removing the extra / in your ingress path config. I think this has been fixed; you should also try removing/adding the / in your URI and in your ingress config to test.
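
For example, the path could mirror the earlier nginx examples in this thread (an untested sketch of the config above with the duplicate slash removed):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argo-server-workflow
  namespace: argo
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - host: mykubernetes.com
    http:
      paths:
      - backend:
          service:
            name: argo-server
            port:
              number: 2746
        path: /argo(/|$)(.*)   # single slash before the capture group, as in earlier examples
        pathType: ImplementationSpecific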


yuvraj9 commented Jul 29, 2024

This comment should help - #3080 (comment)
