BASE_HREF is not working #3080
Hello,
|
This screen can be blank if the workflow is invalid. Check under the summary tab (one of the small buttons, top right). |
@alexec On clicking resubmit from the blank screen, I can see all the steps. Now click on the workflow again: you can see a blank page. Also, from the CLI, I can see the workflow is successful:
argo get artifact-passing-x8zhr
Name: artifact-passing-x8zhr
Namespace: argo
ServiceAccount: argo-workflow-service-account
Status: Succeeded
Created: Tue Jun 23 21:45:24 +0530 (8 minutes ago)
Started: Tue Jun 23 21:45:24 +0530 (8 minutes ago)
Finished: Tue Jun 23 21:45:35 +0530 (8 minutes ago)
Duration: 11 seconds
STEP TEMPLATE PODNAME DURATION MESSAGE
✔ artifact-passing-x8zhr artifact-example
├---✔ generate-artifact whalesay artifact-passing-x8zhr-982905896 4s
└---✔ consume-artifact print-message artifact-passing-x8zhr-2574371560 5s |
@alexec In the ideal case it should load the UI on /argo; we shouldn't need extra routes. |
You can get ingress working as follows:
# diff --git a/manifests/base/argo-server/argo-server-deployment.yaml b/manifests/base/argo-server/argo-server-deployment.yaml
index dbafbfd8..3ad77285 100644
--- a/manifests/base/argo-server/argo-server-deployment.yaml
+++ b/manifests/base/argo-server/argo-server-deployment.yaml
@@ -16,6 +16,9 @@ spec:
       - name: argo-server
         image: argoproj/argocli:latest
         args: [server]
+        env:
+        - name: BASE_HREF
+          value: /argo/
         ports:
         - name: web
           containerPort: 2746
diff --git a/manifests/base/argo-server/argo-server-ingress.yaml b/manifests/base/argo-server/argo-server-ingress.yaml
index e69de29b..f4599000 100644
--- a/manifests/base/argo-server/argo-server-ingress.yaml
+++ b/manifests/base/argo-server/argo-server-ingress.yaml
@@ -0,0 +1,16 @@
+apiVersion: extensions/v1beta1
+kind: Ingress
+metadata:
+  name: argo-server
+  annotations:
+    ingress.kubernetes.io/rewrite-target: /
+spec:
+  rules:
+  - http:
+      paths:
+      - backend:
+          serviceName: argo-server
+          servicePort: 2746
+        path: /argo
+
diff --git a/manifests/base/argo-server/argo-server-service.yaml b/manifests/base/argo-server/argo-server-service.yaml
index 0c6e58d3..5bdc67ac 100644
--- a/manifests/base/argo-server/argo-server-service.yaml
+++ b/manifests/base/argo-server/argo-server-service.yaml
@@ -9,3 +9,4 @@ spec:
   - name: web
     port: 2746
     targetPort: 2746
+  type: LoadBalancer

While needing this is not uncommon for ingresses, it's also not straightforward. |
@kevinsimons-wf, @ramanNarasimhan77, @ematpad, @gordonbondon, @damianoneill, @yuvraj9, and @yk634 - could I please ask you to review the proposed solution and state whether it is adequate, or whether a better solution is needed? |
@alexec I will test this today and come back with my findings |
@alexec I got it working with this setup
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    external-dns.alpha.kubernetes.io/alias: "true"
    external-dns.alpha.kubernetes.io/target: target.example.com
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: HTTP
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: server.example.com
    http:
      paths:
      - backend:
          serviceName: argo-server
          servicePort: 2746
        path: /argo/(.*)
      - backend:
          serviceName: argo-server
          servicePort: 2746
        path: /argo
  tls:
  - hosts:
    - server.example.com
    secretName: some-cert-manager-secret |
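A side note on what the two path entries plus `rewrite-target: /$1` in the config above do to an incoming URL. The sketch below is only a local approximation of the controller's regex rewrite (GNU sed assumed; the real rewriting happens inside ingress-nginx, not in a shell):

```shell
# Approximate ingress-nginx's regex rewrite locally (illustration only; GNU sed assumed).
# path: /argo/(.*)  with  rewrite-target: /$1  strips the /argo/ prefix:
rewrite() { printf '%s\n' "$1" | sed -E 's#^/argo/(.*)$#/\1#'; }

rewrite /argo/workflows/argo   # prints /workflows/argo

# The second, bare "/argo" path has no capture group, so $1 expands to nothing
# and the request is rewritten to "/":
printf '%s\n' /argo | sed -E 's#^/argo$#/#'   # prints /
```

This is also why BASE_HREF=/argo/ is set on the server alongside the rewrite: the browser requests /argo/..., the ingress strips the prefix, and the UI still generates asset links under /argo/.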
@alexec In my case the ingress does not seem to be the issue, as the main workflow page renders properly. Our cluster has this setup. When I deploy Argo separately using the manifests, I don't see this problem. |
@ramanNarasimhan77 do you have a very large workflow? 1000+ nodes? |
@alexec No, my workflow only has 3 steps. I am not sure what could cause this behavior, but if others are able to use Argo with BASE_HREF set, then I think this ticket can be closed. The issue on my cluster is probably caused by one of the ingress controllers. I will continue my investigation. |
Documentation updated with #4306. There is no need to change the argo-server Service; just the Ingress, plus a Deployment change with BASE_HREF, is OK. |
Your config saved the day. Thanks a lot. I just had to turn off SSL redirect. |
Does anybody have a similar one for the AWS Load Balancer Controller, instead of the nginx controller? |
For anyone who finds their way to this thread - or runs into this problem - here's a working implementation for a local K3D cluster. It's all laid out flexibly enough that it should be portable and obvious to understand, modify, or adapt to your own workflow/implementation.

Overview
K3D is a wrapper and set of conveniences for running a fully-dockerized local cluster on Rancher K3s.
- LoadBalancer: Klipper (the built-in default LB for K3D/K3s)

Notes
In this configuration, there are a few things to be aware of.
And one more thing: you can have ArgoCD just treat the included Kustomize example as an Argo App of type Kustomize. In order to do so - and not need a configurationManagementPlugin sidecar - please see this configMapGenerator... these are the required configMap entries for ArgoCD to be able to properly build Kustomizations that include Helm charts in just one step (through the built-in Helm generator). See here for documentation:
See here for the configMap: https://github.com/armenr/5thK8s/blob/main/dependencies/argo-cd/generators/configmap-argocd-cm.yaml

The goodies
Example Repo: https://github.com/armenr/5thK8s
K3D config: https://github.com/armenr/5thK8s/blob/main/assets/k3d_local.yaml
Traefik Ingress Example: https://github.com/armenr/5thK8s/blob/main/dependencies/argo-cd/patches/traefik-middleware.yaml ---> BIG THANK YOU TO @erkerb4
Helm Values for ArgoCD: https://github.com/armenr/5thK8s/blob/main/dependencies/argo-cd/values.yaml

How to Build/Deploy
# go to the directory
cd dependencies/argo-cd
# build via kustomize + built-in helm generator
kustomize build --enable-helm . | kubectl apply -n argocd -f -
|
Not sure why the |
@ramanNarasimhan77 We are facing a similar issue with Kong ingress. Can you share how you resolved this issue? |
@sid8489 I do not have a resolution for this issue. We still have this issue in our Argo installation, as it is behind a bunch of proxies. |
I'm having the same issue, using this file for my configuration. |
After a day of trying various permutations, here is a config that works for me. I am using

Reproducible example -
Argo patch file:
# riaz@server:~/bin/k8s$ cat deployments/argo-patch.yaml
spec:
  template:
    spec:
      containers:
      - name: argo-server
        args:
        - server
        - "--auth-mode=server" # not relevant to reprex
        - "--secure=false"     # not relevant to reprex
        env:
        - name: BASE_HREF
          value: /argo/
        readinessProbe:
          httpGet:
            scheme: HTTP

Traefik ingress:
# riaz@server:~/bin/k8s$ cat deployments/argo-dashboard-ingress.yaml
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: argo-ingressroute
  namespace: argo
spec:
  entryPoints:
  - web
  routes:
  - match: PathPrefix(`/argo`)
    kind: Rule
    services:
    - name: argo-server
      port: 2746
    middlewares:
    - name: argo-stripprefix
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: argo-stripprefix
  namespace: argo
spec:
  stripPrefix:
    prefixes:
    - /argo
    forceSlash: true

kubectl create namespace argo
kubectl apply -n argo -f https://github.com/argoproj/argo-workflows/releases/download/v3.3.9/install.yaml
kubectl patch deployment argo-server --namespace argo --patch-file deployments/argo-patch.yaml
kubectl apply -f deployments/argo-dashboard-ingress.yaml
Hope this helps someone with a similar issue. |
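To make the stripPrefix + forceSlash behavior above concrete, here is a rough shell approximation (illustration only, not Traefik code): the middleware removes the /argo prefix before forwarding, and forceSlash: true turns an empty result into "/".

```shell
# Mimic Traefik's stripPrefix + forceSlash locally (illustration only).
strip_prefix() {
  p="${1#/argo}"           # drop a leading /argo if present
  printf '%s\n' "${p:-/}"  # forceSlash: an empty result becomes "/"
}

strip_prefix /argo/workflows   # prints /workflows
strip_prefix /argo             # prints /
```

The BASE_HREF=/argo/ patch is still required alongside this: the server itself only ever sees prefix-stripped paths, but the UI must keep emitting links under /argo/.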
Rough Istio equivalent of the Traefik manifests above, in case someone needs it. Istio's rewrite here is doing the job of Traefik's stripPrefix.

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: argo-gw
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - yoursite.com
    port:
      name: https
      number: 443
      protocol: HTTPS
    tls:
      credentialName: argo-tls
      mode: SIMPLE
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: argo-https-vs
  namespace: argo
spec:
  gateways:
  - istio-system/argo-gw
  hosts:
  - yoursite.com
  http:
  - match:
    - uri:
        prefix: /argo/
    rewrite:
      uri: /
    route:
    - destination:
        host: argo-server
        port:
          number: 2746 |
An updated version that works with k8s 1.27 (at least where I tested):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argo-server
  namespace: argo
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    ingress.kubernetes.io/protocol: https # Traefik
    nginx.ingress.kubernetes.io/backend-protocol: https # ingress-nginx
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - backend:
          service:
            name: argo-server
            port:
              number: 2746
        path: /argo(/|$)(.*)
        pathType: ImplementationSpecific
|
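For anyone puzzling over why the path above is /argo(/|$)(.*) rather than /argo/(.*): the (/|$) branch lets a bare /argo (no trailing slash) match as well, and rewrite-target: /$2 keeps only the second capture group. A local approximation with sed (illustration only; GNU sed assumed, and the real rewrite happens inside the controller):

```shell
# Approximate the regex rewrite locally (illustration only; GNU sed assumed).
# path: /argo(/|$)(.*)  with  rewrite-target: /$2
rewrite() { printf '%s\n' "$1" | sed -E 's#^/argo(/|$)(.*)$#/\2#'; }

rewrite /argo/workflows/argo   # prints /workflows/argo
rewrite /argo                  # prints /  (the bare prefix matches via the $ branch)
```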
I'm using EKS + ALB (with AWS LB Controller v2.7.0). I needed to set the following.

values.yaml:
server:
  baseHref: /

ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/certificate-arn: #cert-arn
    alb.ingress.kubernetes.io/healthcheck-path: /
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/ssl-redirect: "443"
    alb.ingress.kubernetes.io/target-type: ip
    kubernetes.io/ingress.class: alb
  name: argo-server
  namespace: workflows # change to the namespace where argo-workflows is running
spec:
  ingressClassName: alb
  rules:
  - host: your-argo-workflow-url.com
    http:
      paths:
      - backend:
          service:
            name: argo-server # your argo-server service name (`kubectl get svc -n workflows`)
            port:
              number: 2746
        path: /*
        pathType: ImplementationSpecific |
Just to add to @terbepetra's comment above: if you are using argo-workflows Helm chart 0.41.1 and EKS 1.28, you can just do something like this for the AWS Load Balancer Controller:

server:
  ingress:
    enabled: true
    hosts:
    - argo-workflows.yourdomain.com
    annotations:
      kubernetes.io/ingress.class: alb
      external-dns.alpha.kubernetes.io/hostname: argo-workflows.yourdomain.com # if you are using external-dns
      alb.ingress.kubernetes.io/healthcheck-path: /
      alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
      alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
      alb.ingress.kubernetes.io/ssl-redirect: "443"
      alb.ingress.kubernetes.io/scheme: internal
      alb.ingress.kubernetes.io/target-type: ip
      alb.ingress.kubernetes.io/certificate-arn: YOURCERTARN |
I'm facing the same issue here; the steps I took:

kubectl apply -n argo -f https://github.com/argoproj/argo-workflows/releases/download/v3.5.8/install.yaml
kubectl set env deployment/argo-server -n argo BASE_HREF=/argo/

My ingress:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/issuer: self-signed
    nginx.ingress.kubernetes.io/backend-protocol: HTTP
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
  name: argo-server-workflow
  namespace: argo
spec:
  ingressClassName: nginx
  rules:
  - host: mykubernetes.com
    http:
      paths:
      - backend:
          service:
            name: argo-server
            port:
              number: 2746
        path: /argo/(/|$)(.*)
        pathType: ImplementationSpecific
  tls:
  - hosts:
    - mykubernetes.com
    secretName: my-kubernetes-cert

But when I try to reach
I'm not sure what I'm doing wrong. Any help will be appreciated! Thank you |
@tbernacchi try removing the extra |
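To spell out the difference (my own comparison, based on the working /argo(/|$)(.*) path shown earlier in this thread): with /argo/(/|$)(.*) the regex demands a second slash, or end of string, immediately after /argo/, so an ordinary request like /argo/workflows never matches. A quick check with grep (illustration only; GNU grep assumed):

```shell
# Compare the failing and the working path regex (illustration only; GNU grep assumed).
matches() { printf '%s\n' "$2" | grep -Eq "$1" && echo yes || echo no; }

matches '^/argo/(/|$)' /argo/workflows   # prints no  ("w" is neither "/" nor end of string)
matches '^/argo(/|$)'  /argo/workflows   # prints yes
```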
this comment should help - #3080 (comment) |
Can't access the Argo UI if I change BASE_HREF to something other than /.
I want it to be accessible at xxx.com/argo.
To reproduce it, you can just change the env variable BASE_HREF to something other than /. I have tried these: argo, /argo, /argo/. None of these work.
Getting this on the console of my browser -
This is the image we are using: argoproj/argocli:v2.7.4
Message from the maintainers:
If you are impacted by this bug please add a 👍 reaction to this issue! We often sort issues this way to know what to prioritize.